fenic.api
Query module for semantic operations on DataFrames.
Classes:
- AnthropicLanguageModel – Configuration for Anthropic language models.
- Catalog – Entry point for catalog operations.
- CloudConfig – Configuration for cloud-based execution.
- CohereEmbeddingModel – Configuration for Cohere embedding models.
- Column – A column expression in a DataFrame.
- DataFrame – A data collection organized into named columns.
- DataFrameReader – Interface used to load a DataFrame from external storage systems.
- DataFrameWriter – Interface used to write a DataFrame to external storage systems.
- GoogleDeveloperEmbeddingModel – Configuration for Google Developer embedding models.
- GoogleDeveloperLanguageModel – Configuration for Gemini models accessible through Google Developer AI Studio.
- GoogleVertexEmbeddingModel – Configuration for Google Vertex AI embedding models.
- GoogleVertexLanguageModel – Configuration for Google Vertex AI models.
- GroupedData – Methods for aggregations on a grouped DataFrame.
- Lineage – Query interface for tracing data lineage through a query plan.
- OpenAIEmbeddingModel – Configuration for OpenAI embedding models.
- OpenAILanguageModel – Configuration for OpenAI language models.
- SemanticConfig – Configuration for semantic language and embedding models.
- SemanticExtensions – A namespace for semantic DataFrame operators.
- Session – The entry point to programming with the DataFrame API. Similar to PySpark's SparkSession.
- SessionConfig – Configuration for a user session.
Functions:
- array – Creates a new array column from multiple input columns.
- array_agg – Alias for collect_list().
- array_contains – Checks if an array column contains a specific value.
- array_size – Returns the number of elements in an array column.
- asc – Mark this column for ascending sort order with nulls first.
- asc_nulls_first – Alias for asc().
- asc_nulls_last – Mark this column for ascending sort order with nulls last.
- avg – Aggregate function: returns the average (mean) of all values in the specified column. Applies to numeric and embedding types.
- coalesce – Returns the first non-null value from the given columns for each row.
- col – Creates a Column expression referencing a column in the DataFrame.
- collect_list – Aggregate function: collects all values from the specified column into a list.
- count – Aggregate function: returns the count of non-null values in the specified column.
- desc – Mark this column for descending sort order with nulls first.
- desc_nulls_first – Alias for desc().
- desc_nulls_last – Mark this column for descending sort order with nulls last.
- first – Aggregate function: returns the first non-null value in the specified column.
- greatest – Returns the greatest value from the given columns for each row.
- least – Returns the least value from the given columns for each row.
- lit – Creates a Column expression representing a literal value.
- max – Aggregate function: returns the maximum value in the specified column.
- mean – Aggregate function: returns the mean (average) of all values in the specified column.
- min – Aggregate function: returns the minimum value in the specified column.
- stddev – Aggregate function: returns the sample standard deviation of the specified column.
- struct – Creates a new struct column from multiple input columns.
- sum – Aggregate function: returns the sum of all values in the specified column.
- udf – A decorator or function for creating user-defined functions (UDFs) that can be applied to DataFrame rows.
- when – Evaluates a condition and returns a value if true.
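A brief sketch combining a few of the functions above (a hedged illustration; assumes a session created as in the examples below):
# Label rows with when/otherwise, using col and lit
df = session.create_dataframe({"name": ["Alice", "Bob"], "age": [25, 30]})
df.select(
    col("name"),
    when(col("age") > 28, lit("senior")).otherwise(lit("junior")).alias("level")
).show()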
AnthropicLanguageModel
Bases: BaseModel
Configuration for Anthropic language models.
This class defines the configuration settings for Anthropic language models, including model selection and separate rate limiting parameters for input and output tokens.
Attributes:
- model_name (AnthropicLanguageModelName) – The name of the Anthropic model to use.
- rpm (int) – Requests per minute limit; must be greater than 0.
- input_tpm (int) – Input tokens per minute limit; must be greater than 0.
- output_tpm (int) – Output tokens per minute limit; must be greater than 0.
- profiles (Optional[dict[str, Profile]]) – Optional mapping of profile names to profile configurations.
- default_profile (Optional[str]) – The name of the default profile to use if profiles are configured.
Example
Configuring an Anthropic model with separate input/output rate limits:
config = AnthropicLanguageModel(
    model_name="claude-3-5-haiku-latest",
    rpm=100,
    input_tpm=100,
    output_tpm=100
)
Configuring an Anthropic model with profiles:
config = SessionConfig(
    semantic=SemanticConfig(
        language_models={
            "claude": AnthropicLanguageModel(
                model_name="claude-opus-4-0",
                rpm=100,
                input_tpm=100,
                output_tpm=100,
                profiles={
                    "thinking_disabled": AnthropicLanguageModel.Profile(),
                    "fast": AnthropicLanguageModel.Profile(thinking_token_budget=1024),
                    "thorough": AnthropicLanguageModel.Profile(thinking_token_budget=4096)
                },
                default_profile="fast"
            )
        },
        default_language_model="claude"
    )
)
# Using the default "fast" profile for the "claude" model
semantic.map(instruction="Construct a formal proof of the {hypothesis}.", model_alias="claude")
# Using the "thorough" profile for the "claude" model
semantic.map(instruction="Construct a formal proof of the {hypothesis}.", model_alias=ModelAlias(name="claude", profile="thorough"))
Classes:
- Profile – Anthropic-specific profile configurations.
Profile
Bases: BaseModel
Anthropic-specific profile configurations.
This class defines profile configurations for Anthropic models, allowing different thinking token budget settings to be applied to the same model.
Attributes:
- thinking_token_budget (Optional[int]) – If configuring a model that supports reasoning, provide a default thinking budget in tokens. If not provided, thinking will be disabled for the profile. The minimum token budget supported by Anthropic is 1024 tokens.
Note
If thinking_token_budget is set, temperature cannot be customized; any changes to temperature will be ignored.
Example
Configuring a profile with a thinking budget:
profile = AnthropicLanguageModel.Profile(thinking_token_budget=2048)
Configuring a profile with a large thinking budget:
profile = AnthropicLanguageModel.Profile(thinking_token_budget=8192)
Catalog
Catalog(catalog: BaseCatalog)
Entry point for catalog operations.
The Catalog provides methods to interact with and manage database tables, including listing available tables, describing table schemas, and dropping tables.
Basic usage
# Create a new catalog
session.catalog.create_catalog('my_catalog')
# Returns: True
# Set the current catalog
session.catalog.set_current_catalog('my_catalog')
# Returns: None
# Create a new database
session.catalog.create_database('my_database')
# Returns: True
# Use the new database
session.catalog.set_current_database('my_database')
# Returns: None
# Create a new table
session.catalog.create_table('my_table', Schema([
    ColumnField('id', IntegerType),
]))
# Returns: True
Initialize a Catalog instance.
Parameters:
- catalog (BaseCatalog) – The underlying catalog implementation.
Methods:
- create_catalog – Creates a new catalog.
- create_database – Creates a new database.
- create_table – Creates a new table.
- describe_table – Returns the schema of the specified table.
- does_catalog_exist – Checks if a catalog with the specified name exists.
- does_database_exist – Checks if a database with the specified name exists.
- does_table_exist – Checks if a table with the specified name exists.
- does_view_exist – Checks if a view with the specified name exists.
- drop_catalog – Drops a catalog.
- drop_database – Drops a database.
- drop_table – Drops the specified table.
- drop_view – Drops the specified view.
- get_current_catalog – Returns the name of the current catalog.
- get_current_database – Returns the name of the current database in the current catalog.
- list_catalogs – Returns a list of available catalogs.
- list_databases – Returns a list of databases in the current catalog.
- list_tables – Returns a list of tables stored in the current database.
- list_views – Returns a list of views stored in the current database.
- set_current_catalog – Sets the current catalog.
- set_current_database – Sets the current database.
create_catalog
create_catalog(catalog_name: str, ignore_if_exists: bool = True) -> bool
Creates a new catalog.
Parameters:
- catalog_name (str) – Name of the catalog to create.
- ignore_if_exists (bool, default: True) – If True, return False when the catalog already exists. If False, raise an error when the catalog already exists. Defaults to True.
Raises:
- CatalogAlreadyExistsError – If the catalog already exists and ignore_if_exists is False.
Returns:
- bool – True if the catalog was created successfully, False if the catalog already exists and ignore_if_exists is True.
Create a new catalog
# Create a new catalog named 'my_catalog'
session.catalog.create_catalog('my_catalog')
# Returns: True
Create an existing catalog with ignore_if_exists
# Try to create an existing catalog with ignore_if_exists=True
session.catalog.create_catalog('my_catalog', ignore_if_exists=True)
# Returns: False
Create an existing catalog without ignore_if_exists
# Try to create an existing catalog with ignore_if_exists=False
session.catalog.create_catalog('my_catalog', ignore_if_exists=False)
# Raises: CatalogAlreadyExistsError
create_database
create_database(database_name: str, ignore_if_exists: bool = True) -> bool
Creates a new database.
Parameters:
- database_name (str) – Fully qualified or relative database name to create.
- ignore_if_exists (bool, default: True) – If True, return False when the database already exists. If False, raise an error when the database already exists. Defaults to True.
Raises:
- DatabaseAlreadyExistsError – If the database already exists and ignore_if_exists is False.
Returns:
- bool – True if the database was created successfully, False if the database already exists and ignore_if_exists is True.
Create a new database
# Create a new database named 'my_database'
session.catalog.create_database('my_database')
# Returns: True
Create an existing database with ignore_if_exists
# Try to create an existing database with ignore_if_exists=True
session.catalog.create_database('my_database', ignore_if_exists=True)
# Returns: False
Create an existing database without ignore_if_exists
# Try to create an existing database with ignore_if_exists=False
session.catalog.create_database('my_database', ignore_if_exists=False)
# Raises: DatabaseAlreadyExistsError
create_table
create_table(table_name: str, schema: Schema, ignore_if_exists: bool = True) -> bool
Creates a new table.
Parameters:
- table_name (str) – Fully qualified or relative table name to create.
- schema (Schema) – Schema of the table to create.
- ignore_if_exists (bool, default: True) – If True, return False when the table already exists. If False, raise an error when the table already exists. Defaults to True.
Returns:
- bool – True if the table was created successfully, False if the table already exists and ignore_if_exists is True.
Raises:
- TableAlreadyExistsError – If the table already exists and ignore_if_exists is False.
Create a new table
# Create a new table with an integer column
session.catalog.create_table('my_table', Schema([
    ColumnField('id', IntegerType),
]))
# Returns: True
Create an existing table with ignore_if_exists
# Try to create an existing table with ignore_if_exists=True
session.catalog.create_table('my_table', Schema([
    ColumnField('id', IntegerType),
]), ignore_if_exists=True)
# Returns: False
Create an existing table without ignore_if_exists
# Try to create an existing table with ignore_if_exists=False
session.catalog.create_table('my_table', Schema([
    ColumnField('id', IntegerType),
]), ignore_if_exists=False)
# Raises: TableAlreadyExistsError
describe_table
describe_table(table_name: str) -> Schema
Returns the schema of the specified table.
Parameters:
- table_name (str) – Fully qualified or relative table name to describe.
Returns:
- Schema – A schema object describing the table's structure with field names and types.
Raises:
- TableNotFoundError – If the table doesn't exist.
Describe a table's schema
# For a table created with: CREATE TABLE t1 (id int)
session.catalog.describe_table('t1')
# Returns: Schema([
# ColumnField('id', IntegerType),
# ])
does_catalog_exist
does_catalog_exist(catalog_name: str) -> bool
Checks if a catalog with the specified name exists.
Parameters:
- catalog_name (str) – Name of the catalog to check.
Returns:
- bool – True if the catalog exists, False otherwise.
Check if a catalog exists
# Check if 'my_catalog' exists
session.catalog.does_catalog_exist('my_catalog')
# Returns: True
does_database_exist
does_database_exist(database_name: str) -> bool
Checks if a database with the specified name exists.
Parameters:
- database_name (str) – Fully qualified or relative database name to check.
Returns:
- bool – True if the database exists, False otherwise.
Check if a database exists
# Check if 'my_database' exists
session.catalog.does_database_exist('my_database')
# Returns: True
does_table_exist
does_table_exist(table_name: str) -> bool
Checks if a table with the specified name exists.
Parameters:
- table_name (str) – Fully qualified or relative table name to check.
Returns:
- bool – True if the table exists, False otherwise.
Check if a table exists
# Check if 'my_table' exists
session.catalog.does_table_exist('my_table')
# Returns: True
does_view_exist
does_view_exist(view_name: str) -> bool
Checks if a view with the specified name exists.
Parameters:
- view_name (str) – Fully qualified or relative view name to check.
Returns:
- bool – True if the view exists, False otherwise.
Check if a view exists
# Check if 'my_view' exists
session.catalog.does_view_exist('my_view')
# Returns: True
drop_catalog
drop_catalog(catalog_name: str, ignore_if_not_exists: bool = True) -> bool
Drops a catalog.
Parameters:
- catalog_name (str) – Name of the catalog to drop.
- ignore_if_not_exists (bool, default: True) – If True, silently return if the catalog doesn't exist. If False, raise an error if the catalog doesn't exist. Defaults to True.
Raises:
- CatalogNotFoundError – If the catalog does not exist and ignore_if_not_exists is False.
Returns:
- bool – True if the catalog was dropped successfully, False if the catalog didn't exist and ignore_if_not_exists is True.
Drop a non-existent catalog
# Try to drop a non-existent catalog
session.catalog.drop_catalog('my_catalog')
# Returns: False
Drop a non-existent catalog without ignore_if_not_exists
# Try to drop a non-existent catalog with ignore_if_not_exists=False
session.catalog.drop_catalog('my_catalog', ignore_if_not_exists=False)
# Raises: CatalogNotFoundError
drop_database
drop_database(database_name: str, cascade: bool = False, ignore_if_not_exists: bool = True) -> bool
Drops a database.
Parameters:
- database_name (str) – Fully qualified or relative database name to drop.
- cascade (bool, default: False) – If True, drop all tables in the database. Defaults to False.
- ignore_if_not_exists (bool, default: True) – If True, silently return if the database doesn't exist. If False, raise an error if the database doesn't exist. Defaults to True.
Raises:
- DatabaseNotFoundError – If the database does not exist and ignore_if_not_exists is False.
- CatalogError – If the current database is being dropped, or if the database is not empty and cascade is False.
Returns:
- bool – True if the database was dropped successfully, False if the database didn't exist and ignore_if_not_exists is True.
Drop a non-existent database
# Try to drop a non-existent database
session.catalog.drop_database('my_database')
# Returns: False
Drop a non-existent database without ignore_if_not_exists
# Try to drop a non-existent database with ignore_if_not_exists=False
session.catalog.drop_database('my_database', ignore_if_not_exists=False)
# Raises: DatabaseNotFoundError
drop_table
drop_table(table_name: str, ignore_if_not_exists: bool = True) -> bool
Drops the specified table.
By default this method will return False if the table doesn't exist.
Parameters:
- table_name (str) – Fully qualified or relative table name to drop.
- ignore_if_not_exists (bool, default: True) – If True, return False when the table doesn't exist. If False, raise an error when the table doesn't exist. Defaults to True.
Returns:
- bool – True if the table was dropped successfully, False if the table didn't exist and ignore_if_not_exists is True.
Raises:
- TableNotFoundError – If the table doesn't exist and ignore_if_not_exists is False.
Drop an existing table
# Drop an existing table 't1'
session.catalog.drop_table('t1')
# Returns: True
Drop a non-existent table with ignore_if_not_exists
# Try to drop a non-existent table with ignore_if_not_exists=True
session.catalog.drop_table('t2', ignore_if_not_exists=True)
# Returns: False
Drop a non-existent table without ignore_if_not_exists
# Try to drop a non-existent table with ignore_if_not_exists=False
session.catalog.drop_table('t2', ignore_if_not_exists=False)
# Raises: TableNotFoundError
drop_view
drop_view(view_name: str, ignore_if_not_exists: bool = True) -> bool
Drops the specified view.
By default this method will return False if the view doesn't exist.
Parameters:
- view_name (str) – Fully qualified or relative view name to drop.
- ignore_if_not_exists (bool, default: True) – If True, return False when the view doesn't exist. If False, raise an error when the view doesn't exist. Defaults to True.
Returns:
- bool – True if the view was dropped successfully, False if the view didn't exist and ignore_if_not_exists is True.
Raises:
- TableNotFoundError – If the view doesn't exist and ignore_if_not_exists is False.
Drop an existing view
# Drop an existing view 'v1'
session.catalog.drop_view('v1')
# Returns: True
Drop a non-existent view with ignore_if_not_exists
# Try to drop a non-existent view with ignore_if_not_exists=True
session.catalog.drop_view('v2', ignore_if_not_exists=True)
# Returns: False
Drop a non-existent view without ignore_if_not_exists
# Try to drop a non-existent view with ignore_if_not_exists=False
session.catalog.drop_view('v2', ignore_if_not_exists=False)
# Raises: TableNotFoundError
get_current_catalog
get_current_catalog() -> str
Returns the name of the current catalog.
Returns:
- str – The name of the current catalog.
Get current catalog name
# Get the name of the current catalog
session.catalog.get_current_catalog()
# Returns: 'default'
get_current_database
get_current_database() -> str
Returns the name of the current database in the current catalog.
Returns:
- str – The name of the current database.
Get current database name
# Get the name of the current database
session.catalog.get_current_database()
# Returns: 'default'
list_catalogs
list_catalogs() -> List[str]
Returns a list of available catalogs.
Returns:
- List[str] – A list of catalog names available in the system. Returns an empty list if no catalogs are found.
List all catalogs
# Get all available catalogs
session.catalog.list_catalogs()
# Returns: ['default', 'my_catalog', 'other_catalog']
list_databases
list_databases() -> List[str]
Returns a list of databases in the current catalog.
Returns:
- List[str] – A list of database names in the current catalog. Returns an empty list if no databases are found.
List all databases
# Get all databases in the current catalog
session.catalog.list_databases()
# Returns: ['default', 'my_database', 'other_database']
list_tables
list_tables() -> List[str]
Returns a list of tables stored in the current database.
This method queries the current database to retrieve all available table names.
Returns:
- List[str] – A list of table names stored in the database. Returns an empty list if no tables are found.
List all tables
# Get all tables in the current database
session.catalog.list_tables()
# Returns: ['table1', 'table2', 'table3']
list_views
list_views() -> List[str]
Returns a list of views stored in the current database.
This method queries the current database to retrieve all available view names.
Returns:
- List[str] – A list of view names stored in the database. Returns an empty list if no views are found.
List all views
# Get all views in the current database
session.catalog.list_views()
# Returns: ['view1', 'view2', 'view3']
set_current_catalog
set_current_catalog(catalog_name: str) -> None
Sets the current catalog.
Parameters:
- catalog_name (str) – Name of the catalog to set as current.
Raises:
- ValueError – If the specified catalog doesn't exist.
Set current catalog
# Set 'my_catalog' as the current catalog
session.catalog.set_current_catalog('my_catalog')
set_current_database
set_current_database(database_name: str) -> None
Sets the current database.
Parameters:
- database_name (str) – Fully qualified or relative database name to set as current.
Raises:
- DatabaseNotFoundError – If the specified database doesn't exist.
Set current database
# Set 'my_database' as the current database
session.catalog.set_current_database('my_database')
CloudConfig
Bases: BaseModel
Configuration for cloud-based execution.
This class defines settings for running operations in a cloud environment, allowing for scalable and distributed processing of language model operations.
Attributes:
- size (Optional[CloudExecutorSize]) – Size of the cloud executor instance. If None, the default size will be used.
Example
Configuring cloud execution with a specific size:
config = CloudConfig(size=CloudExecutorSize.MEDIUM)
Using default cloud configuration:
config = CloudConfig()
CohereEmbeddingModel
Bases: BaseModel
Configuration for Cohere embedding models.
This class defines the configuration settings for Cohere embedding models, including model selection and rate limiting parameters.
Attributes:
- model_name (CohereEmbeddingModelName) – The name of the Cohere model to use.
- rpm (int) – Requests per minute limit for the model.
- tpm (int) – Tokens per minute limit for the model.
- profiles (Optional[dict[str, Profile]]) – Optional dictionary of profile configurations.
- default_profile (Optional[str]) – Default profile name to use if none specified.
Example
Configuring a Cohere embedding model with profiles:
cohere_config = CohereEmbeddingModel(
    model_name="embed-v4.0",
    rpm=100,
    tpm=50_000,
    profiles={
        "high_dim": CohereEmbeddingModel.Profile(
            embedding_dimensionality=1536,
            embedding_task_type="search_document"
        ),
        "classification": CohereEmbeddingModel.Profile(
            embedding_dimensionality=1024,
            embedding_task_type="classification"
        )
    },
    default_profile="high_dim"
)
Classes:
- Profile – Profile configurations for Cohere embedding models.
Profile
Bases: BaseModel
Profile configurations for Cohere embedding models.
This class defines profile configurations for Cohere embedding models, allowing different output dimensionality and task type settings to be applied to the same model.
Attributes:
- output_dimensionality (Optional[int]) – The dimensionality of the embedding created by this model. If not provided, the model will use its default dimensionality.
- input_type (CohereEmbeddingTaskType) – The type of input text (search_query, search_document, classification, clustering).
Example
Configuring a profile with custom dimensionality:
profile = CohereEmbeddingModel.Profile(output_dimensionality=1536)
Configuring a profile with default settings:
profile = CohereEmbeddingModel.Profile()
Column
A column expression in a DataFrame.
This class represents a column expression that can be used in DataFrame operations. It provides methods for accessing, transforming, and combining column data.
Create a column reference
# Reference a column by name using col() function
col("column_name")
Use column in operations
# Perform arithmetic operations
df.select(col("price") * col("quantity"))
Chain column operations
# Chain multiple operations
df.select(col("name").upper().contains("John"))
Methods:
- alias – Create an alias for this column.
- asc – Mark this column for ascending sort order.
- asc_nulls_first – Alias for asc().
- asc_nulls_last – Mark this column for ascending sort order with nulls last.
- cast – Cast the column to a new data type.
- contains – Check if the column contains a substring.
- contains_any – Check if the column contains any of the specified substrings.
- desc – Mark this column for descending sort order.
- desc_nulls_first – Alias for desc().
- desc_nulls_last – Mark this column for descending sort order with nulls last.
- ends_with – Check if the column ends with a substring.
- get_item – Access an item in a struct or array column.
- ilike – Check if the column matches a SQL LIKE pattern (case-insensitive).
- is_in – Check if the column is in a list of values or a column expression.
- is_not_null – Check if the column contains non-NULL values.
- is_null – Check if the column contains NULL values.
- like – Check if the column matches a SQL LIKE pattern.
- otherwise – Evaluates a list of conditions and returns one of multiple possible result expressions.
- rlike – Check if the column matches a regular expression pattern.
- starts_with – Check if the column starts with a substring.
- when – Evaluates a list of conditions and returns one of multiple possible result expressions.
alias
alias(name: str) -> Column
Create an alias for this column.
This method assigns a new name to the column expression, which is useful for renaming columns or providing names for complex expressions.
Parameters:
- name (str) – The alias name to assign.
Returns:
- Column – Column with the specified alias.
Rename a column
# Rename a column to a new name
df.select(col("original_name").alias("new_name"))
Name a complex expression
# Give a name to a calculated column
df.select((col("price") * col("quantity")).alias("total_value"))
asc
asc() -> Column
Mark this column for ascending sort order.
Returns:
- Column – A sort expression with ascending order and nulls first.
Sort by age in ascending order
# Sort a dataframe by age in ascending order
df.sort(col("age").asc()).show()
asc_nulls_first
asc_nulls_first() -> Column
Alias for asc().
Returns:
- Column – A Column expression that provides a column and sort order to the sort function.
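A short example, mirroring asc() (which also places nulls first):
# Sort a dataframe by age in ascending order, with nulls appearing first
df.sort(col("age").asc_nulls_first()).show()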
asc_nulls_last
asc_nulls_last() -> Column
Mark this column for ascending sort order with nulls last.
Returns:
- Column – A sort expression with ascending order and nulls last.
Sort by age in ascending order with nulls last
# Sort a dataframe by age in ascending order, with nulls appearing last
df.sort(col("age").asc_nulls_last()).show()
cast
cast(data_type: DataType) -> Column
Cast the column to a new data type.
This method creates an expression that casts the column to a specified data type. The casting behavior depends on the source and target types:
Primitive type casting:
- Numeric types (IntegerType, FloatType, DoubleType) can be cast between each other
- Numeric types can be cast to/from StringType
- BooleanType can be cast to/from numeric types and StringType
- StringType cannot be directly cast to BooleanType (will raise TypeError)
Complex type casting:
- ArrayType can only be cast to another ArrayType (with castable element types)
- StructType can only be cast to another StructType (with matching/castable fields)
- Primitive types cannot be cast to/from complex types
Parameters:
- data_type (DataType) – The target DataType to cast the column to.
Returns:
- Column – A Column representing the casted expression.
Cast integer to string
# Convert an integer column to string type
df.select(col("int_col").cast(StringType))
Cast array of integers to array of strings
# Convert an array of integers to an array of strings
df.select(col("int_array").cast(ArrayType(element_type=StringType)))
Cast struct fields to different types
# Convert struct fields to different types
new_type = StructType([
    StructField("id", StringType),
    StructField("value", FloatType)
])
df.select(col("data_struct").cast(new_type))
Raises:
- TypeError – If the requested cast operation is not supported.
contains
contains(other: Union[str, Column]) -> Column
Check if the column contains a substring.
This method creates a boolean expression that checks if each value in the column contains the specified substring. The substring can be either a literal string or a column expression.
Parameters:
- other (Union[str, Column]) – The substring to search for (can be a string or column expression).
Returns:
- Column – A boolean column indicating whether each value contains the substring.
Find rows where name contains "john"
# Filter rows where the name column contains "john"
df.filter(col("name").contains("john"))
Find rows where text contains a dynamic pattern
# Filter rows where text contains a value from another column
df.filter(col("text").contains(col("pattern")))
contains_any
contains_any(others: List[str], case_insensitive: bool = True) -> Column
Check if the column contains any of the specified substrings.
This method creates a boolean expression that checks if each value in the column contains any of the specified substrings. The matching can be case-sensitive or case-insensitive.
Parameters:
- others (List[str]) – List of substrings to search for.
- case_insensitive (bool, default: True) – Whether to perform case-insensitive matching (default: True).
Returns:
- Column – A boolean column indicating whether each value contains any substring.
Find rows where name contains "john" or "jane" (case-insensitive)
# Filter rows where name contains either "john" or "jane"
df.filter(col("name").contains_any(["john", "jane"]))
Case-sensitive matching
# Filter rows with case-sensitive matching
df.filter(col("name").contains_any(["John", "Jane"], case_insensitive=False))
desc
desc() -> Column
Mark this column for descending sort order.
Returns:
- Column – A sort expression with descending order.
Sort by age in descending order
# Sort a dataframe by age in descending order
df.sort(col("age").desc()).show()
desc_nulls_first
desc_nulls_first() -> Column
Alias for desc().
Returns:
- Column – A sort expression with descending order and nulls first.
Sort by age in descending order with nulls first
df.sort(col("age").desc_nulls_first()).show()
desc_nulls_last
desc_nulls_last() -> Column
Mark this column for descending sort order with nulls last.
Returns:
- Column – A sort expression with descending order and nulls last.
Sort by age in descending order with nulls last
# Sort a dataframe by age in descending order, with nulls appearing last
df.sort(col("age").desc_nulls_last()).show()
ends_with
ends_with(other: Union[str, Column]) -> Column
Check if the column ends with a substring.
This method creates a boolean expression that checks if each value in the column ends with the specified substring. The substring can be either a literal string or a column expression.
Parameters:
- other (Union[str, Column]) – The substring to check for at the end (can be a string or column expression).
Returns:
- Column – A boolean column indicating whether each value ends with the substring.
Find rows where email ends with "@gmail.com"
df.filter(col("email").ends_with("@gmail.com"))
Find rows where text ends with a dynamic pattern
df.filter(col("text").ends_with(col("suffix")))
Raises:
- ValueError – If the substring ends with a regular expression anchor ($).
get_item
get_item(key: Union[str, int, Column]) -> Column
Access an item in a struct or array column.
This method allows accessing elements in complex data types:
- For array columns, the key should be an integer index or a column expression that evaluates to an integer
- For struct columns, the key should be a literal field name
Parameters:
- key (Union[str, int, Column]) – The index (for arrays) or field name (for structs) to access.
Returns:
- Column – A Column representing the accessed item.
Access an array element
# Get the first element from an array column
df.select(col("array_column").get_item(0))
Access a struct field
# Get a field from a struct column
df.select(col("struct_column").get_item("field_name"))
ilike
ilike(other: str) -> Column
Check if the column matches a SQL LIKE pattern (case-insensitive).
This method creates a boolean expression that checks if each value in the column matches the specified SQL LIKE pattern, ignoring case. The pattern must be a literal string and cannot be a column expression.
SQL LIKE pattern syntax:
- % matches any sequence of characters
- _ matches any single character
Parameters:
- other (str) – The SQL LIKE pattern to match against.
Returns:
- Column – A boolean column indicating whether each value matches the pattern.
Find rows where name starts with "j" and ends with "n" (case-insensitive)
# Filter rows where name matches the pattern "j%n" (case-insensitive)
df.filter(col("name").ilike("j%n"))
Find rows where code matches pattern (case-insensitive)
# Filter rows where code matches the pattern "a_b%" (case-insensitive)
df.filter(col("code").ilike("a_b%"))
is_in
is_in(other: Union[List[Any], ColumnOrName]) -> Column
Check if the column is in a list of values or a column expression.
Parameters:
- other (Union[List[Any], ColumnOrName]) – A list of values or a Column expression.
Returns:
- Column – A Column expression representing whether each element of the Column is in the list.
Check if name is in a list of values
# Filter rows where name is in a list of values
df.filter(col("name").is_in(["Alice", "Bob"]))
Check if value is in another column
# Filter rows where name is in another column
df.filter(col("name").is_in(col("other_column")))
is_not_null
is_not_null() -> Column
Check if the column contains non-NULL values.
This method creates an expression that evaluates to TRUE when the column value is not NULL.
Returns:
- Column – A Column representing a boolean expression that is TRUE when this column is not NULL.
Filter rows where a column is not NULL
df.filter(col("some_column").is_not_null())
Use in a complex condition
df.filter(col("col1").is_not_null() & (col("col2") <= 50))
is_null
is_null() -> Column
Check if the column contains NULL values.
This method creates an expression that evaluates to TRUE when the column value is NULL.
Returns:
- Column – A Column representing a boolean expression that is TRUE when this column is NULL.
Filter rows where a column is NULL
# Filter rows where some_column is NULL
df.filter(col("some_column").is_null())
Use in a complex condition
# Filter rows where col1 is NULL or col2 is greater than 100
df.filter(col("col1").is_null() | (col("col2") > 100))
like
like(other: str) -> Column
Check if the column matches a SQL LIKE pattern.
This method creates a boolean expression that checks if each value in the column matches the specified SQL LIKE pattern. The pattern must be a literal string and cannot be a column expression.
SQL LIKE pattern syntax:
- % matches any sequence of characters
- _ matches any single character
Parameters:
- other (str) – The SQL LIKE pattern to match against.
Returns:
- Column – A boolean column indicating whether each value matches the pattern.
Find rows where name starts with "J" and ends with "n"
# Filter rows where name matches the pattern "J%n"
df.filter(col("name").like("J%n"))
Find rows where code matches specific pattern
# Filter rows where code matches the pattern "A_B%"
df.filter(col("code").like("A_B%"))
otherwise
otherwise(value: Column) -> Column
Evaluates a list of conditions and returns one of multiple possible result expressions.
If Column.otherwise() is not invoked, None is returned for unmatched conditions. otherwise() supplies the result for rows that match none of the preceding when() conditions.
Parameters:
- value (Column) – A literal value or Column expression to return.
Returns:
- Column – A Column expression producing the given value for rows not matched by any previous conditions.
Use when/otherwise for conditional logic
# Create a DataFrame with age and name columns
df = session.create_dataframe(
    {"age": [2, 5], "name": ["Alice", "Bob"]}
)
# Use when/otherwise to create a case result column
df.select(
    col("name"),
    when(col("age") > 3, 1).otherwise(0).alias("case_result")
).show()
# Output:
# +-----+-----------+
# | name|case_result|
# +-----+-----------+
# |Alice| 0|
# | Bob| 1|
# +-----+-----------+
rlike
rlike(other: str) -> Column
Check if the column matches a regular expression pattern.
This method creates a boolean expression that checks if each value in the column matches the specified regular expression pattern. The pattern must be a literal string and cannot be a column expression.
Parameters:
- other (str) – The regular expression pattern to match against.
Returns:
- Column – A boolean column indicating whether each value matches the pattern.
Find rows where phone number matches pattern
# Filter rows where phone number matches a specific pattern
df.filter(col("phone").rlike(r"^\d{3}-\d{3}-\d{4}$"))
Find rows where text contains word boundaries
# Filter rows where text contains a word with boundaries
df.filter(col("text").rlike(r"\bhello\b"))
starts_with
starts_with(other: Union[str, Column]) -> Column
Check if the column starts with a substring.
This method creates a boolean expression that checks if each value in the column starts with the specified substring. The substring can be either a literal string or a column expression.
Parameters:
- other (Union[str, Column]) – The substring to check for at the start (can be a string or column expression).
Returns:
- Column – A boolean column indicating whether each value starts with the substring.
Find rows where name starts with "Mr"
# Filter rows where name starts with "Mr"
df.filter(col("name").starts_with("Mr"))
Find rows where text starts with a dynamic pattern
# Filter rows where text starts with a value from another column
df.filter(col("text").starts_with(col("prefix")))
Raises:
- ValueError – If the substring starts with a regular expression anchor (^).
when
when(condition: Column, value: Column) -> Column
Evaluates a list of conditions and returns one of multiple possible result expressions.
If Column.otherwise() is not invoked, None is returned for unmatched conditions. otherwise() supplies the result for rows that match none of the preceding when() conditions.
Parameters:
- condition (Column) – A boolean Column expression.
- value (Column) – A literal value or Column expression to return if the condition is true.
Returns:
- Column – A Column expression producing the given value for rows where the condition is true.
Raises:
- TypeError – If the condition is not a boolean Column expression.
Use when/otherwise for conditional logic
# Create a DataFrame with age and name columns
df = session.create_dataframe(
    {"age": [2, 5], "name": ["Alice", "Bob"]}
)
# Use when/otherwise to create a case result column
df.select(
    col("name"),
    when(col("age") > 3, 1).otherwise(0).alias("case_result")
).show()
# Output:
# +-----+-----------+
# | name|case_result|
# +-----+-----------+
# |Alice| 0|
# | Bob| 1|
# +-----+-----------+
DataFrame
A data collection organized into named columns.
The DataFrame class represents a lazily evaluated computation on data. Operations on DataFrame build up a logical query plan that is only executed when an action like show(), to_polars(), to_pandas(), to_arrow(), to_pydict(), to_pylist(), or count() is called.
The DataFrame supports method chaining for building complex transformations.
Create and transform a DataFrame
# Create a DataFrame from a dictionary
df = session.create_dataframe({"id": [1, 2, 3], "value": ["a", "b", "c"]})
# Chain transformations
result = df.filter(col("id") > 1).select("id", "value")
# Show results
result.show()
# Output:
# +---+-----+
# | id|value|
# +---+-----+
# | 2| b|
# | 3| c|
# +---+-----+
Methods:
- agg – Aggregate on the entire DataFrame without groups.
- cache – Alias for persist(). Mark DataFrame for caching after first computation.
- collect – Execute the DataFrame computation and return the result as a QueryResult.
- count – Count the number of rows in the DataFrame.
- drop – Remove one or more columns from this DataFrame.
- drop_duplicates – Return a DataFrame with duplicate rows removed.
- explain – Display the logical plan of the DataFrame.
- explode – Create a new row for each element in an array column.
- filter – Filters rows using the given condition.
- group_by – Groups the DataFrame using the specified columns.
- join – Joins this DataFrame with another DataFrame.
- limit – Limits the number of rows to the specified number.
- lineage – Create a Lineage object to trace data through transformations.
- order_by – Sort the DataFrame by the specified columns. Alias for sort().
- persist – Mark this DataFrame to be persisted after first computation.
- select – Projects a set of Column expressions or column names.
- show – Display the DataFrame content in a tabular form.
- sort – Sort the DataFrame by the specified columns.
- to_arrow – Execute the DataFrame computation and return an Apache Arrow Table.
- to_pandas – Execute the DataFrame computation and return a Pandas DataFrame.
- to_polars – Execute the DataFrame computation and return the result as a Polars DataFrame.
- to_pydict – Execute the DataFrame computation and return a dictionary of column arrays.
- to_pylist – Execute the DataFrame computation and return a list of row dictionaries.
- union – Return a new DataFrame containing the union of rows in this and another DataFrame.
- unnest – Unnest the specified struct columns into separate columns.
- where – Filters rows using the given condition (alias for filter()).
- with_column – Add a new column or replace an existing column.
- with_column_renamed – Rename a column. No-op if the column does not exist.
Attributes:
- columns (List[str]) – Get list of column names.
- schema (Schema) – Get the schema of this DataFrame.
- semantic (SemanticExtensions) – Interface for semantic operations on the DataFrame.
- write (DataFrameWriter) – Interface for saving the content of the DataFrame.
columns
property
columns: List[str]
Get list of column names.
Returns:
- List[str] – List of all column names in the DataFrame.
Examples:
>>> df.columns
['name', 'age', 'city']
schema
property
schema: Schema
Get the schema of this DataFrame.
Returns:
- Schema – Schema containing field names and data types.
Examples:
>>> df.schema
Schema([
ColumnField('name', StringType),
ColumnField('age', IntegerType)
])
semantic
property
semantic: SemanticExtensions
Interface for semantic operations on the DataFrame.
write
property
write: DataFrameWriter
Interface for saving the content of the DataFrame.
Returns:
- DataFrameWriter – Writer interface to write DataFrame.
agg
agg(*exprs: Union[Column, Dict[str, str]]) -> DataFrame
Aggregate on the entire DataFrame without groups.
This is equivalent to group_by() without any grouping columns.
Parameters:
- *exprs (Union[Column, Dict[str, str]], default: ()) – Aggregation expressions or dictionary of aggregations.
Returns:
- DataFrame – Aggregation results.
Multiple aggregations
# Create sample DataFrame
df = session.create_dataframe({
    "salary": [80000, 70000, 90000, 75000, 85000],
    "age": [25, 30, 35, 28, 32]
})
# Multiple aggregations
df.agg(
    count().alias("total_rows"),
    avg(col("salary")).alias("avg_salary")
).show()
# Output:
# +----------+-----------+
# |total_rows|avg_salary|
# +----------+-----------+
# | 5| 80000.0|
# +----------+-----------+
Dictionary style
# Dictionary style
df.agg({"salary": "avg", "age": "max"}).show()
# Output:
# +-----------+--------+
# |avg(salary)|max(age)|
# +-----------+--------+
# | 80000.0| 35|
# +-----------+--------+
cache
cache() -> DataFrame
Alias for persist(). Mark DataFrame for caching after first computation.
Returns:
- DataFrame – Same DataFrame, but marked for caching.
See Also
persist(): Full documentation of caching behavior
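A minimal sketch of reusing a cached result (caching semantics as documented under persist()):
# Mark the filtered result for caching; it is computed once on first action
filtered = df.filter(col("age") > 25).cache()
filtered.count()  # triggers computation and caches the result
filtered.show()   # reuses the cached result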
collect
collect(data_type: DataLikeType = 'polars') -> QueryResult
Execute the DataFrame computation and return the result as a QueryResult.
This is an action that triggers computation of the DataFrame query plan. All transformations and operations are executed, and the results are materialized into a QueryResult, which contains both the result data and the query metrics.
Parameters:
- data_type (DataLikeType, default: 'polars') – The type of data to return.
Returns:
- QueryResult – A QueryResult with materialized data and query metrics.
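A minimal sketch (the QueryResult attribute names below are assumptions based on the description above):
# Execute the plan and materialize the result with metrics
result = df.filter(col("age") > 25).collect()
result.data     # materialized data, Polars by default (assumed attribute)
result.metrics  # query execution metrics (assumed attribute)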
count
count() -> int
Count the number of rows in the DataFrame.
This is an action that triggers computation of the DataFrame. The output is an integer representing the number of rows.
Returns:
- int – The number of rows in the DataFrame.
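A short example:
# Count the rows that pass a filter
df = session.create_dataframe({"age": [25, 30, 35]})
df.filter(col("age") > 25).count()
# Returns: 2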
drop
drop(*col_names: str) -> DataFrame
Remove one or more columns from this DataFrame.
Parameters:
- *col_names (str, default: ()) – Names of columns to drop.
Returns:
- DataFrame – New DataFrame without specified columns.
Raises:
- ValueError – If any specified column doesn't exist in the DataFrame.
- ValueError – If dropping the columns would result in an empty DataFrame.
Drop single column
# Create sample DataFrame
df = session.create_dataframe({
    "id": [1, 2, 3],
    "name": ["Alice", "Bob", "Charlie"],
    "age": [25, 30, 35]
})
# Drop single column
df.drop("age").show()
# Output:
# +---+-------+
# | id| name|
# +---+-------+
# | 1| Alice|
# | 2| Bob|
# | 3|Charlie|
# +---+-------+
Drop multiple columns
# Drop multiple columns
df.drop(col("id"), "age").show()
# Output:
# +-------+
# | name|
# +-------+
# | Alice|
# | Bob|
# |Charlie|
# +-------+
Error when dropping non-existent column
# This will raise a ValueError
df.drop("non_existent_column")
# ValueError: Column 'non_existent_column' not found in DataFrame
drop_duplicates
drop_duplicates(subset: Optional[List[str]] = None) -> DataFrame
Return a DataFrame with duplicate rows removed.
Parameters:
- subset (Optional[List[str]], default: None) – Column names to consider when identifying duplicates. If not provided, all columns are considered.
Returns:
- DataFrame – A new DataFrame with duplicate rows removed.
Raises:
- ValueError – If a specified column is not present in the current DataFrame schema.
Remove duplicates considering specific columns
# Create sample DataFrame
df = session.create_dataframe({
    "c1": [1, 2, 3, 1],
    "c2": ["a", "a", "a", "a"],
    "c3": ["b", "b", "b", "b"]
})
# Remove duplicates considering all columns
df.drop_duplicates(["c1", "c2", "c3"]).show()
# Output:
# +---+---+---+
# | c1| c2| c3|
# +---+---+---+
# | 1| a| b|
# | 2| a| b|
# | 3| a| b|
# +---+---+---+
# Remove duplicates considering only c1
df.drop_duplicates(["c1"]).show()
# Output:
# +---+---+---+
# | c1| c2| c3|
# +---+---+---+
# | 1| a| b|
# | 2| a| b|
# | 3| a| b|
# +---+---+---+
explain
explain() -> None
Display the logical plan of the DataFrame.
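A short example (the plan output format depends on the query):
# Print the logical plan without executing the query
df.filter(col("age") > 25).select(col("age")).explain()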
explode
explode(column: ColumnOrName) -> DataFrame
Create a new row for each element in an array column.
This operation is useful for flattening nested data structures. For each row in the input DataFrame that contains an array/list in the specified column, this method will:
1. Create N new rows, where N is the length of the array.
2. Each new row will be identical to the original row, except the array column will contain just a single element from the original array.
3. Rows with NULL values or empty arrays in the specified column are filtered out.
Parameters:
- column (ColumnOrName) – Name of array column to explode (as string) or Column expression.
Returns:
- DataFrame – New DataFrame with the array column exploded into multiple rows.
Raises:
- TypeError – If column argument is not a string or Column.
Explode array column
# Create sample DataFrame
df = session.create_dataframe({
    "id": [1, 2, 3, 4],
    "tags": [["red", "blue"], ["green"], [], None],
    "name": ["Alice", "Bob", "Carol", "Dave"]
})
# Explode the tags column
df.explode("tags").show()
# Output:
# +---+-----+-----+
# | id| tags| name|
# +---+-----+-----+
# | 1| red|Alice|
# | 1| blue|Alice|
# | 2|green| Bob|
# +---+-----+-----+
Using column expression
# Explode using column expression
df.explode(col("tags")).show()
# Output:
# +---+-----+-----+
# | id| tags| name|
# +---+-----+-----+
# | 1| red|Alice|
# | 1| blue|Alice|
# | 2|green| Bob|
# +---+-----+-----+
filter
filter(condition: Column) -> DataFrame
Filters rows using the given condition.
Parameters:
- condition (Column) – A Column expression that evaluates to a boolean.
Returns:
- DataFrame – Filtered DataFrame.
Filter with numeric comparison
# Create a DataFrame
df = session.create_dataframe({"age": [25, 30, 35], "name": ["Alice", "Bob", "Charlie"]})
# Filter with numeric comparison
df.filter(col("age") > 25).show()
# Output:
# +---+-------+
# |age| name|
# +---+-------+
# | 30| Bob|
# | 35|Charlie|
# +---+-------+
Filter with semantic predicate
# Filter with semantic predicate
df.filter((col("age") > 25) & semantic.predicate("This {feedback} mentions problems with the user interface or navigation")).show()
# Output:
# +---+-------+
# |age| name|
# +---+-------+
# | 30| Bob|
# | 35|Charlie|
# +---+-------+
Filter with multiple conditions
# Filter with multiple conditions
df.filter((col("age") > 25) & (col("age") <= 35)).show()
# Output:
# +---+-------+
# |age| name|
# +---+-------+
# | 30| Bob|
# | 35|Charlie|
# +---+-------+
Source code in src/fenic/api/dataframe/dataframe.py
group_by
group_by(*cols: ColumnOrName) -> GroupedData
Groups the DataFrame using the specified columns.
Parameters:
-
*cols
(ColumnOrName
, default:()
) –Columns to group by. Can be column names as strings or Column expressions.
Returns:
-
GroupedData
(GroupedData
) –Object for performing aggregations on the grouped data.
Group by single column
# Create sample DataFrame
df = session.create_dataframe({
"department": ["IT", "HR", "IT", "HR", "IT"],
"salary": [80000, 70000, 90000, 75000, 85000]
})
# Group by single column
df.group_by(col("department")).count().show()
# Output:
# +----------+-----+
# |department|count|
# +----------+-----+
# |        IT|    3|
# |        HR|    2|
# +----------+-----+
Group by multiple columns
# Group by multiple columns (assumes the DataFrame also has a "location" column)
df.group_by(col("department"), col("location")).agg({"salary": "avg"}).show()
# Output:
# +----------+--------+-----------+
# |department|location|avg(salary)|
# +----------+--------+-----------+
# |        IT|     NYC|    85000.0|
# |        HR|     NYC|    72500.0|
# +----------+--------+-----------+
Group by expression
# Group by an expression (assumes a DataFrame with an "age" column)
df.group_by(col("age").cast("int").alias("age_group")).count().show()
# Output:
# +---------+-----+
# |age_group|count|
# +---------+-----+
# |       20|    2|
# |       30|    3|
# |       40|    1|
# +---------+-----+
Source code in src/fenic/api/dataframe/dataframe.py
join
join(other: DataFrame, on: Union[str, List[str]], *, how: JoinType = 'inner') -> DataFrame
join(other: DataFrame, *, left_on: Union[ColumnOrName, List[ColumnOrName]], right_on: Union[ColumnOrName, List[ColumnOrName]], how: JoinType = 'inner') -> DataFrame
join(other: DataFrame, on: Optional[Union[str, List[str]]] = None, *, left_on: Optional[Union[ColumnOrName, List[ColumnOrName]]] = None, right_on: Optional[Union[ColumnOrName, List[ColumnOrName]]] = None, how: JoinType = 'inner') -> DataFrame
Joins this DataFrame with another DataFrame.
The DataFrames must not have duplicate column names between them. This API only supports equi-joins. For non-equi-joins, use session.sql().
Parameters:
-
other
(DataFrame
) –DataFrame to join with.
-
on
(Optional[Union[str, List[str]]]
, default:None
) –Join condition(s). Can be: a column name (str), a list of column names (List[str]), a Column expression (e.g., col('a')), a list of Column expressions, or None for cross joins.
-
left_on
(Optional[Union[ColumnOrName, List[ColumnOrName]]]
, default:None
) –Column(s) from the left DataFrame to join on. Can be: - A column name (str) - A Column expression (e.g., col('a'), col('a') + 1) - A list of column names or expressions
-
right_on
(Optional[Union[ColumnOrName, List[ColumnOrName]]]
, default:None
) –Column(s) from the right DataFrame to join on. Can be: - A column name (str) - A Column expression (e.g., col('b'), upper(col('b'))) - A list of column names or expressions
-
how
(JoinType
, default:'inner'
) –Type of join to perform.
Returns:
-
DataFrame
–Joined DataFrame.
Raises:
-
ValidationError
–If cross join is used with an ON clause.
-
ValidationError
–If join condition is invalid.
-
ValidationError
–If both 'on' and 'left_on'/'right_on' parameters are provided.
-
ValidationError
–If only one of 'left_on' or 'right_on' is provided.
-
ValidationError
–If 'left_on' and 'right_on' have different lengths.
Inner join on column name
# Create sample DataFrames
df1 = session.create_dataframe({
"id": [1, 2, 3],
"name": ["Alice", "Bob", "Charlie"]
})
df2 = session.create_dataframe({
"id": [1, 2, 4],
"age": [25, 30, 35]
})
# Join on single column
df1.join(df2, on=col("id")).show()
# Output:
# +---+-----+---+
# | id| name|age|
# +---+-----+---+
# | 1|Alice| 25|
# | 2| Bob| 30|
# +---+-----+---+
Join with expression
# Join with Column expressions
df1.join(
df2,
left_on=col("id"),
right_on=col("id"),
).show()
# Output:
# +---+-----+---+
# | id| name|age|
# +---+-----+---+
# | 1|Alice| 25|
# | 2| Bob| 30|
# +---+-----+---+
Cross join
# Cross join (cartesian product)
df1.join(df2, how="cross").show()
# Output:
# +---+-------+---+---+
# | id|   name| id|age|
# +---+-------+---+---+
# |  1|  Alice|  1| 25|
# |  1|  Alice|  2| 30|
# |  1|  Alice|  4| 35|
# |  2|    Bob|  1| 25|
# |  2|    Bob|  2| 30|
# |  2|    Bob|  4| 35|
# |  3|Charlie|  1| 25|
# |  3|Charlie|  2| 30|
# |  3|Charlie|  4| 35|
# +---+-------+---+---+
Source code in src/fenic/api/dataframe/dataframe.py
limit
limit(n: int) -> DataFrame
Limits the number of rows to the specified number.
Parameters:
-
n
(int
) –Maximum number of rows to return.
Returns:
-
DataFrame
(DataFrame
) –DataFrame with at most n rows.
Raises:
-
TypeError
–If n is not an integer.
Limit rows
# Create sample DataFrame
df = session.create_dataframe({
"id": [1, 2, 3, 4, 5],
"name": ["Alice", "Bob", "Charlie", "Dave", "Eve"]
})
# Get first 3 rows
df.limit(3).show()
# Output:
# +---+-------+
# | id|   name|
# +---+-------+
# |  1|  Alice|
# |  2|    Bob|
# |  3|Charlie|
# +---+-------+
Limit with other operations
# Limit after filtering
df.filter(col("id") > 2).limit(2).show()
# Output:
# +---+-------+
# | id|   name|
# +---+-------+
# |  3|Charlie|
# |  4|   Dave|
# +---+-------+
Source code in src/fenic/api/dataframe/dataframe.py
lineage
lineage() -> Lineage
Create a Lineage object to trace data through transformations.
The Lineage interface allows you to trace how specific rows are transformed through your DataFrame operations, both forwards and backwards through the computation graph.
Returns:
-
Lineage
(Lineage
) –Interface for querying data lineage
Example
# Create lineage query
lineage = df.lineage()
# Trace specific rows backwards through transformations
source_rows = lineage.backwards(["result_uuid1", "result_uuid2"])
# Or trace forwards to see outputs
result_rows = lineage.forwards(["source_uuid1"])
See Also
LineageQuery: Full documentation of lineage querying capabilities
Source code in src/fenic/api/dataframe/dataframe.py
order_by
order_by(cols: Union[ColumnOrName, List[ColumnOrName], None] = None, ascending: Optional[Union[bool, List[bool]]] = None) -> DataFrame
Sort the DataFrame by the specified columns. Alias for sort().
Returns:
-
DataFrame
(DataFrame
) –Sorted DataFrame.
See Also
sort(): Full documentation of sorting behavior and parameters.
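Example
A minimal sketch (assuming a DataFrame df with an "age" column); behavior mirrors sort():
# Sort by age in descending order
df.order_by(col("age").desc()).show()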
Source code in src/fenic/api/dataframe/dataframe.py
persist
persist() -> DataFrame
Mark this DataFrame to be persisted after first computation.
The persisted DataFrame will be cached after its first computation, avoiding recomputation in subsequent operations. This is useful for DataFrames that are reused multiple times in your workflow.
Returns:
-
DataFrame
(DataFrame
) –Same DataFrame, but marked for persistence
Example
# Cache intermediate results for reuse
filtered_df = (df
.filter(col("age") > 25)
.persist() # Cache these results
)
# Both operations will use cached results
result1 = filtered_df.group_by("department").count()
result2 = filtered_df.select("name", "salary")
Source code in src/fenic/api/dataframe/dataframe.py
select
select(*cols: ColumnOrName) -> DataFrame
Projects a set of Column expressions or column names.
Parameters:
-
*cols
(ColumnOrName
, default:()
) –Column expressions to select. Can be: - String column names (e.g., "id", "name") - Column objects (e.g., col("id"), col("age") + 1)
Returns:
-
DataFrame
(DataFrame
) –A new DataFrame with selected columns
Select by column names
# Create a DataFrame
df = session.create_dataframe({"name": ["Alice", "Bob"], "age": [25, 30]})
# Select by column names
df.select(col("name"), col("age")).show()
# Output:
# +-----+---+
# | name|age|
# +-----+---+
# |Alice| 25|
# | Bob| 30|
# +-----+---+
Select with expressions
# Select with expressions
df.select(col("name"), col("age") + 1).show()
# Output:
# +-----+-------+
# | name|age + 1|
# +-----+-------+
# |Alice|     26|
# |  Bob|     31|
# +-----+-------+
Mix strings and expressions
# Mix string names and expressions
df.select("name", col("age") * 2).show()
# Output:
# +-----+-------+
# | name|age * 2|
# +-----+-------+
# |Alice|     50|
# |  Bob|     60|
# +-----+-------+
Source code in src/fenic/api/dataframe/dataframe.py
show
show(n: int = 10, explain_analyze: bool = False) -> None
Display the DataFrame content in a tabular form.
This is an action that triggers computation of the DataFrame. The output is printed to stdout in a formatted table.
Parameters:
-
n
(int
, default:10
) –Number of rows to display
-
explain_analyze
(bool
, default:False
) –Whether to print the explain analyze plan
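Example
A minimal sketch (assuming a populated DataFrame df):
# Display the first 5 rows
df.show(5)
# Display rows along with the analyzed plan
df.show(explain_analyze=True)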
Source code in src/fenic/api/dataframe/dataframe.py
sort
sort(cols: Union[ColumnOrName, List[ColumnOrName], None] = None, ascending: Optional[Union[bool, List[bool]]] = None) -> DataFrame
Sort the DataFrame by the specified columns.
Parameters:
-
cols
(Union[ColumnOrName, List[ColumnOrName], None]
, default:None
) –Columns to sort by. This can be: a single column name (str), a Column expression (e.g., col("name")), or a list of column names or Column expressions. Column expressions may include sorting directives such as asc("col"), desc("col"), asc_nulls_last("col"), etc. If no columns are provided, the operation is a no-op.
-
ascending
(Optional[Union[bool, List[bool]]]
, default:None
) –A boolean or list of booleans indicating sort order. If True, sorts in ascending order; if False, descending. If a list is provided, its length must match the number of columns. Cannot be used if any of the columns use asc()/desc() expressions. If not specified and no sort expressions are used, columns will be sorted in ascending order by default.
Returns:
-
DataFrame
(DataFrame
) –A new DataFrame sorted by the specified columns.
Raises:
-
ValueError
–If ascending is provided and its length does not match cols, or if both ascending and column expressions like asc()/desc() are used.
-
TypeError
–If cols is not a column name, Column, or list of column names/Columns, or if ascending is not a boolean or list of booleans.
Sort in ascending order
# Create sample DataFrame
df = session.create_dataframe([(2, "Alice"), (5, "Bob")], schema=["age", "name"])
# Sort by age in ascending order
df.sort(asc(col("age"))).show()
# Output:
# +---+-----+
# |age| name|
# +---+-----+
# | 2|Alice|
# | 5| Bob|
# +---+-----+
Sort in descending order
# Sort by age in descending order
df.sort(col("age").desc()).show()
# Output:
# +---+-----+
# |age| name|
# +---+-----+
# | 5| Bob|
# | 2|Alice|
# +---+-----+
Sort with boolean ascending parameter
# Sort by age in descending order using boolean
df.sort(col("age"), ascending=False).show()
# Output:
# +---+-----+
# |age| name|
# +---+-----+
# | 5| Bob|
# | 2|Alice|
# +---+-----+
Multiple columns with different sort orders
# Create sample DataFrame
df = session.create_dataframe([(2, "Alice"), (2, "Bob"), (5, "Bob")], schema=["age", "name"])
# Sort by age descending, then name ascending
df.sort(desc(col("age")), col("name")).show()
# Output:
# +---+-----+
# |age| name|
# +---+-----+
# | 5| Bob|
# | 2|Alice|
# | 2| Bob|
# +---+-----+
Multiple columns with list of ascending strategies
# Sort both columns in descending order
df.sort([col("age"), col("name")], ascending=[False, False]).show()
# Output:
# +---+-----+
# |age| name|
# +---+-----+
# | 5| Bob|
# | 2| Bob|
# | 2|Alice|
# +---+-----+
Source code in src/fenic/api/dataframe/dataframe.py
to_arrow
to_arrow() -> pa.Table
Execute the DataFrame computation and return an Apache Arrow Table.
This is an action that triggers computation of the DataFrame query plan. All transformations and operations are executed, and the results are materialized into an Apache Arrow Table with columnar memory layout optimized for analytics and zero-copy data exchange.
Returns:
-
Table
–pa.Table: An Apache Arrow Table containing the computed results
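Example
A minimal sketch (assuming an existing session):
df = session.create_dataframe({"id": [1, 2, 3]})
# Materialize the query result as an Arrow Table
table = df.to_arrow()
print(table.num_rows)  # 3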
Source code in src/fenic/api/dataframe/dataframe.py
to_pandas
to_pandas() -> pd.DataFrame
Execute the DataFrame computation and return a Pandas DataFrame.
This is an action that triggers computation of the DataFrame query plan. All transformations and operations are executed, and the results are materialized into a Pandas DataFrame.
Returns:
-
DataFrame
–pd.DataFrame: A Pandas DataFrame containing the computed results.
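Example
A minimal sketch (assuming an existing session):
df = session.create_dataframe({"id": [1, 2, 3]})
# Materialize the query result as a Pandas DataFrame
pdf = df.to_pandas()
print(len(pdf))  # 3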
Source code in src/fenic/api/dataframe/dataframe.py
to_polars
to_polars() -> pl.DataFrame
Execute the DataFrame computation and return the result as a Polars DataFrame.
This is an action that triggers computation of the DataFrame query plan. All transformations and operations are executed, and the results are materialized into a Polars DataFrame.
Returns:
-
DataFrame
–pl.DataFrame: A Polars DataFrame with materialized results
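Example
A minimal sketch (assuming an existing session):
df = session.create_dataframe({"id": [1, 2, 3]})
# Materialize the query result as a Polars DataFrame
pldf = df.to_polars()
print(pldf.shape)  # (3, 1)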
Source code in src/fenic/api/dataframe/dataframe.py
to_pydict
to_pydict() -> Dict[str, List[Any]]
Execute the DataFrame computation and return a dictionary of column arrays.
This is an action that triggers computation of the DataFrame query plan. All transformations and operations are executed, and the results are materialized into a Python dictionary where each column becomes a list of values.
Returns:
-
Dict[str, List[Any]]
–Dict[str, List[Any]]: A dictionary containing the computed results with: - Keys: Column names as strings - Values: Lists containing all values for each column
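Example
A minimal sketch (assuming an existing session):
df = session.create_dataframe({"id": [1, 2], "name": ["Alice", "Bob"]})
# Materialize as a column-oriented dictionary
data = df.to_pydict()
# {"id": [1, 2], "name": ["Alice", "Bob"]}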
Source code in src/fenic/api/dataframe/dataframe.py
to_pylist
to_pylist() -> List[Dict[str, Any]]
Execute the DataFrame computation and return a list of row dictionaries.
This is an action that triggers computation of the DataFrame query plan. All transformations and operations are executed, and the results are materialized into a Python list where each element is a dictionary representing one row with column names as keys.
Returns:
-
List[Dict[str, Any]]
–List[Dict[str, Any]]: A list containing the computed results with: - Each element: A dictionary representing one row - Dictionary keys: Column names as strings - Dictionary values: Cell values in Python native types - List length equals number of rows in the result
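Example
A minimal sketch (assuming an existing session):
df = session.create_dataframe({"id": [1, 2], "name": ["Alice", "Bob"]})
# Materialize as a row-oriented list of dictionaries
rows = df.to_pylist()
# [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]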
Source code in src/fenic/api/dataframe/dataframe.py
union
union(other: DataFrame) -> DataFrame
Return a new DataFrame containing the union of rows in this and another DataFrame.
This is equivalent to UNION ALL in SQL. To remove duplicates, use drop_duplicates() after union().
Parameters:
-
other
(DataFrame
) –Another DataFrame with the same schema.
Returns:
-
DataFrame
(DataFrame
) –A new DataFrame containing rows from both DataFrames.
Raises:
-
ValueError
–If the DataFrames have different schemas.
-
TypeError
–If other is not a DataFrame.
Union two DataFrames
# Create two DataFrames
df1 = session.create_dataframe({
"id": [1, 2],
"value": ["a", "b"]
})
df2 = session.create_dataframe({
"id": [3, 4],
"value": ["c", "d"]
})
# Union the DataFrames
df1.union(df2).show()
# Output:
# +---+-----+
# | id|value|
# +---+-----+
# |  1|    a|
# |  2|    b|
# |  3|    c|
# |  4|    d|
# +---+-----+
Union with duplicates
# Create DataFrames with overlapping data
df1 = session.create_dataframe({
"id": [1, 2],
"value": ["a", "b"]
})
df2 = session.create_dataframe({
"id": [2, 3],
"value": ["b", "c"]
})
# Union with duplicates
df1.union(df2).show()
# Output:
# +---+-----+
# | id|value|
# +---+-----+
# |  1|    a|
# |  2|    b|
# |  2|    b|
# |  3|    c|
# +---+-----+
# Remove duplicates after union
df1.union(df2).drop_duplicates().show()
# Output:
# +---+-----+
# | id|value|
# +---+-----+
# |  1|    a|
# |  2|    b|
# |  3|    c|
# +---+-----+
Source code in src/fenic/api/dataframe/dataframe.py
unnest
unnest(*col_names: str) -> DataFrame
Unnest the specified struct columns into separate columns.
This operation flattens nested struct data by expanding each field of a struct into its own top-level column.
For each specified column containing a struct:
1. Each field in the struct becomes a separate column.
2. New columns are named after the corresponding struct fields.
3. The new columns are inserted into the DataFrame in place of the original struct column.
4. The overall column order is preserved.
Parameters:
-
*col_names
(str
, default:()
) –One or more struct columns to unnest. Each can be a string (column name) or a Column expression.
Returns:
-
DataFrame
(DataFrame
) –A new DataFrame with the specified struct columns expanded.
Raises:
-
TypeError
–If any argument is not a string or Column.
-
ValueError
–If a specified column does not contain struct data.
Unnest struct column
# Create sample DataFrame
df = session.create_dataframe({
"id": [1, 2],
"tags": [{"red": 1, "blue": 2}, {"red": 3}],
"name": ["Alice", "Bob"]
})
# Unnest the tags column
df.unnest(col("tags")).show()
# Output:
# +---+---+----+-----+
# | id|red|blue| name|
# +---+---+----+-----+
# |  1|  1|   2|Alice|
# |  2|  3|null|  Bob|
# +---+---+----+-----+
Unnest multiple struct columns
# Create sample DataFrame with multiple struct columns
df = session.create_dataframe({
"id": [1, 2],
"tags": [{"red": 1, "blue": 2}, {"red": 3}],
"info": [{"age": 25, "city": "NY"}, {"age": 30, "city": "LA"}],
"name": ["Alice", "Bob"]
})
# Unnest multiple struct columns
df.unnest(col("tags"), col("info")).show()
# Output:
# +---+---+----+---+----+-----+
# | id|red|blue|age|city| name|
# +---+---+----+---+----+-----+
# |  1|  1|   2| 25|  NY|Alice|
# |  2|  3|null| 30|  LA|  Bob|
# +---+---+----+---+----+-----+
Source code in src/fenic/api/dataframe/dataframe.py
where
where(condition: Column) -> DataFrame
Filters rows using the given condition (alias for filter()).
Parameters:
-
condition
(Column
) –A Column expression that evaluates to a boolean
Returns:
-
DataFrame
(DataFrame
) –Filtered DataFrame
See Also
filter(): Full documentation of filtering behavior
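Example
A minimal sketch (assuming a DataFrame df with an "age" column); equivalent to filter():
df.where(col("age") > 25).show()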
Source code in src/fenic/api/dataframe/dataframe.py
with_column
with_column(col_name: str, col: Union[Any, Column]) -> DataFrame
Add a new column or replace an existing column.
Parameters:
-
col_name
(str
) –Name of the new column
-
col
(Union[Any, Column]
) –Column expression or value to assign to the column. If not a Column, it will be treated as a literal value.
Returns:
-
DataFrame
(DataFrame
) –New DataFrame with added/replaced column
Add literal column
# Create a DataFrame
df = session.create_dataframe({"name": ["Alice", "Bob"], "age": [25, 30]})
# Add literal column
df.with_column("constant", lit(1)).show()
# Output:
# +-----+---+--------+
# | name|age|constant|
# +-----+---+--------+
# |Alice| 25|       1|
# |  Bob| 30|       1|
# +-----+---+--------+
Add computed column
# Add computed column
df.with_column("double_age", col("age") * 2).show()
# Output:
# +-----+---+----------+
# | name|age|double_age|
# +-----+---+----------+
# |Alice| 25|        50|
# |  Bob| 30|        60|
# +-----+---+----------+
Replace existing column
# Replace existing column
df.with_column("age", col("age") + 1).show()
# Output:
# +-----+---+
# | name|age|
# +-----+---+
# |Alice| 26|
# | Bob| 31|
# +-----+---+
Add column with complex expression
# Add column with complex expression
df.with_column(
"age_category",
when(col("age") < 30, "young")
.when(col("age") < 50, "middle")
.otherwise("senior")
).show()
# Output:
# +-----+---+------------+
# | name|age|age_category|
# +-----+---+------------+
# |Alice| 25|       young|
# |  Bob| 30|      middle|
# +-----+---+------------+
Source code in src/fenic/api/dataframe/dataframe.py
with_column_renamed
with_column_renamed(col_name: str, new_col_name: str) -> DataFrame
Rename a column. No-op if the column does not exist.
Parameters:
-
col_name
(str
) –Name of the column to rename.
-
new_col_name
(str
) –New name for the column.
Returns:
-
DataFrame
(DataFrame
) –New DataFrame with the column renamed.
Rename a column
# Create sample DataFrame
df = session.create_dataframe({
"age": [25, 30, 35],
"name": ["Alice", "Bob", "Charlie"]
})
# Rename a column
df.with_column_renamed("age", "age_in_years").show()
# Output:
# +------------+-------+
# |age_in_years|   name|
# +------------+-------+
# |          25|  Alice|
# |          30|    Bob|
# |          35|Charlie|
# +------------+-------+
Rename multiple columns
# Rename multiple columns
df = (df
    .with_column_renamed("age", "age_in_years")
    .with_column_renamed("name", "full_name")
)
df.show()
# Output:
# +------------+---------+
# |age_in_years|full_name|
# +------------+---------+
# |          25|    Alice|
# |          30|      Bob|
# |          35|  Charlie|
# +------------+---------+
Source code in src/fenic/api/dataframe/dataframe.py
DataFrameReader
DataFrameReader(session_state: BaseSessionState)
Interface used to load a DataFrame from external storage systems.
Similar to PySpark's DataFrameReader.
Creates a DataFrameReader.
Parameters:
-
session_state
(BaseSessionState
) –The session state to use for reading
Methods:
-
csv
–Load a DataFrame from one or more CSV files.
-
parquet
–Load a DataFrame from one or more Parquet files.
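Example
A minimal usage sketch; the reader is normally obtained via session.read rather than constructed directly:
df = session.read.csv("data.csv")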
Source code in src/fenic/api/io/reader.py
csv
csv(paths: Union[str, Path, list[Union[str, Path]]], schema: Optional[Schema] = None, merge_schemas: bool = False) -> DataFrame
Load a DataFrame from one or more CSV files.
Parameters:
-
paths
(Union[str, Path, list[Union[str, Path]]]
) –A single file path, a glob pattern (e.g., "data/*.csv"), or a list of paths.
-
schema
(Optional[Schema]
, default:None
) –(optional) A complete schema definition of column names and their types. Only primitive types are supported. For example: Schema([ColumnField(name="id", data_type=IntegerType), ColumnField(name="name", data_type=StringType)]). If provided, all files must match this schema exactly: all column names must be present, and values must be convertible to the specified types. Partial schemas are not allowed.
-
merge_schemas
(bool
, default:False
) –Whether to merge schemas across all files. If True, column names are unified across files, missing columns are filled with nulls, and column types are inferred and widened as needed. If False (default), only columns from the first file are accepted; column types from the first file are inferred and applied across all files, and an error is raised if subsequent files do not match the first file's column names and order. The "first file" is the first file in lexicographic order (for glob patterns), or the first file in the provided list (for lists of paths).
Notes
- The first row in each file is assumed to be a header row.
- Delimiters (e.g., comma, tab) are automatically inferred.
- You may specify either schema or merge_schemas=True, but not both.
- Any date/datetime columns are cast to strings during ingestion.
Raises:
-
ValidationError
–If both schema and merge_schemas=True are provided.
-
ValidationError
–If any path does not end with .csv.
-
PlanError
–If schemas cannot be merged or if there's a schema mismatch when merge_schemas=False.
Read a single CSV file
df = session.read.csv("file.csv")
Read multiple CSV files with schema merging
df = session.read.csv("data/*.csv", merge_schemas=True)
Read CSV files with explicit schema
df = session.read.csv(
["a.csv", "b.csv"],
schema=Schema([
ColumnField(name="id", data_type=IntegerType),
ColumnField(name="value", data_type=FloatType)
])
)
Source code in src/fenic/api/io/reader.py
parquet
parquet(paths: Union[str, Path, list[Union[str, Path]]], merge_schemas: bool = False) -> DataFrame
Load a DataFrame from one or more Parquet files.
Parameters:
-
paths
(Union[str, Path, list[Union[str, Path]]]
) –A single file path, a glob pattern (e.g., "data/*.parquet"), or a list of paths.
-
merge_schemas
(bool
, default:False
) –If True, infers and merges schemas across all files. Missing columns are filled with nulls, and differing types are widened to a common supertype.
Behavior
- If merge_schemas=False (default), all files must match the schema of the first file exactly. Subsequent files must contain all columns from the first file with compatible data types. If any column is missing or has incompatible types, an error is raised.
- If merge_schemas=True, column names are unified across all files, and data types are automatically widened to accommodate all values.
- The "first file" is defined as:
- The first file in lexicographic order (for glob patterns), or
- The first file in the provided list (for lists of paths).
Notes
- Date and datetime columns are cast to strings during ingestion.
Raises:
-
ValidationError
–If any file does not have a .parquet extension.
-
PlanError
–If schemas cannot be merged or if there's a schema mismatch when merge_schemas=False.
Read a single Parquet file
df = session.read.parquet("file.parquet")
Read multiple Parquet files
df = session.read.parquet("data/*.parquet")
Read Parquet files with schema merging
df = session.read.parquet(["a.parquet", "b.parquet"], merge_schemas=True)
Source code in src/fenic/api/io/reader.py
DataFrameWriter
DataFrameWriter(dataframe: DataFrame)
Interface used to write a DataFrame to external storage systems.
Similar to PySpark's DataFrameWriter.
Initialize a DataFrameWriter.
Parameters:
-
dataframe
(DataFrame
) –The DataFrame to write.
Methods:
-
csv
–Saves the content of the DataFrame as a single CSV file with comma as the delimiter and headers in the first row.
-
parquet
–Saves the content of the DataFrame as a single Parquet file.
-
save_as_table
–Saves the content of the DataFrame as the specified table.
-
save_as_view
–Saves the content of the DataFrame as a view.
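Example
A minimal usage sketch; the writer is normally obtained via df.write rather than constructed directly:
metrics = df.write.parquet("output.parquet")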
Source code in src/fenic/api/io/writer.py
csv
csv(file_path: Union[str, Path], mode: Literal['error', 'overwrite', 'ignore'] = 'overwrite') -> QueryMetrics
Saves the content of the DataFrame as a single CSV file with comma as the delimiter and headers in the first row.
Parameters:
-
file_path
(Union[str, Path]
) –Path to save the CSV file to
-
mode
(Literal['error', 'overwrite', 'ignore']
, default:'overwrite'
) –Write mode. Default is "overwrite". - error: Raises an error if file exists - overwrite: Overwrites the file if it exists - ignore: Silently ignores operation if file exists
Returns:
-
QueryMetrics
(QueryMetrics
) –The query metrics
Save with overwrite mode (default)
df.write.csv("output.csv") # Overwrites if exists
Save with error mode
df.write.csv("output.csv", mode="error") # Raises error if exists
Save with ignore mode
df.write.csv("output.csv", mode="ignore") # Skips if exists
Source code in src/fenic/api/io/writer.py
parquet
parquet(file_path: Union[str, Path], mode: Literal['error', 'overwrite', 'ignore'] = 'overwrite') -> QueryMetrics
Saves the content of the DataFrame as a single Parquet file.
Parameters:
-
file_path
(Union[str, Path]
) –Path to save the Parquet file to
-
mode
(Literal['error', 'overwrite', 'ignore']
, default:'overwrite'
) –Write mode. Default is "overwrite". - error: Raises an error if file exists - overwrite: Overwrites the file if it exists - ignore: Silently ignores operation if file exists
Returns:
-
QueryMetrics
(QueryMetrics
) –The query metrics
Save with overwrite mode (default)
df.write.parquet("output.parquet") # Overwrites if exists
Save with error mode
df.write.parquet("output.parquet", mode="error") # Raises error if exists
Save with ignore mode
df.write.parquet("output.parquet", mode="ignore") # Skips if exists
Source code in src/fenic/api/io/writer.py
save_as_table
save_as_table(table_name: str, mode: Literal['error', 'append', 'overwrite', 'ignore'] = 'error') -> QueryMetrics
Saves the content of the DataFrame as the specified table.
Parameters:
-
table_name
(str
) –Name of the table to save to
-
mode
(Literal['error', 'append', 'overwrite', 'ignore']
, default:'error'
) –Write mode. Default is "error". - error: Raises an error if table exists - append: Appends data to table if it exists - overwrite: Overwrites existing table - ignore: Silently ignores operation if table exists
Returns:
-
QueryMetrics
(QueryMetrics
) –The query metrics
Save with error mode (default)
df.write.save_as_table("my_table") # Raises error if table exists
Save with append mode
df.write.save_as_table("my_table", mode="append") # Adds to existing table
Save with overwrite mode
df.write.save_as_table("my_table", mode="overwrite") # Replaces existing table
Source code in src/fenic/api/io/writer.py
save_as_view
save_as_view(view_name: str) -> None
Saves the content of the DataFrame as a view.
Parameters:
-
view_name
(str
) –Name of the view to save to
Returns: None.
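Example
A minimal sketch (assuming a populated DataFrame df):
# Register the DataFrame's query as a named view
df.write.save_as_view("my_view")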
Source code in src/fenic/api/io/writer.py
GoogleDeveloperEmbeddingModel
Bases: BaseModel
Configuration for Google Developer embedding models.
This class defines the configuration settings for Google embedding models available in Google Developer AI Studio, including model selection and rate limiting parameters. These models are accessible using a GOOGLE_API_KEY environment variable.
Attributes:
-
model_name
(GoogleDeveloperEmbeddingModelName
) –The name of the Google Developer embedding model to use.
-
rpm
(int
) –Requests per minute limit; must be greater than 0.
-
tpm
(int
) –Tokens per minute limit; must be greater than 0.
-
profiles
(Optional[dict[str, Profile]]
) –Optional mapping of profile names to profile configurations.
-
default_profile
(Optional[str]
) –The name of the default profile to use if profiles are configured.
Example
Configuring a Google Developer embedding model with rate limits:
config = GoogleDeveloperEmbeddingModel(
model_name="gemini-embedding-001",
rpm=100,
tpm=1000
)
Configuring a Google Developer embedding model with profiles:
config = GoogleDeveloperEmbeddingModel(
model_name="gemini-embedding-001",
rpm=100,
tpm=1000,
profiles={
"default": GoogleDeveloperEmbeddingModelConfig.Profile(),
"high_dim": GoogleDeveloperEmbeddingModelConfig.Profile(output_dimensionality=3072)
},
default_profile="default"
)
Classes:
-
Profile
–Profile configurations for Google Developer embedding models.
Profile
Bases: BaseModel
Profile configurations for Google Developer embedding models.
This class defines profile configurations for Google embedding models, allowing different output dimensionality and task type settings to be applied to the same model.
Attributes:
-
output_dimensionality
(Optional[int]
) –The dimensionality of the embedding created by this model. If not provided, the model will use its default dimensionality.
-
task_type
(GoogleEmbeddingTaskType
) –The type of task for the embedding model.
Example
Configuring a profile with custom dimensionality:
profile = GoogleDeveloperEmbeddingModel.Profile(output_dimensionality=3072)
Configuring a profile with default settings:
profile = GoogleDeveloperEmbeddingModel.Profile()
GoogleDeveloperLanguageModel
Bases: BaseModel
Configuration for Gemini models accessible through Google Developer AI Studio.
This class defines the configuration settings for Google Gemini models available in Google Developer AI Studio, including model selection and rate limiting parameters. These models are accessible using a GOOGLE_API_KEY environment variable.
Attributes:
-
model_name
(GoogleDeveloperLanguageModelName
) –The name of the Google Developer model to use.
-
rpm
(int
) –Requests per minute limit; must be greater than 0.
-
tpm
(int
) –Tokens per minute limit; must be greater than 0.
-
profiles
(Optional[dict[str, Profile]]
) –Optional mapping of profile names to profile configurations.
-
default_profile
(Optional[str]
) –The name of the default profile to use if profiles are configured.
Example
Configuring a Google Developer model with rate limits:
config = GoogleDeveloperLanguageModel(
model_name="gemini-2.0-flash",
rpm=100,
tpm=1000
)
Configuring a reasoning Google Developer model with profiles:
config = GoogleDeveloperLanguageModel(
model_name="gemini-2.5-flash",
rpm=100,
tpm=1000,
profiles={
"thinking_disabled": GoogleDeveloperLanguageModel.Profile(),
"fast": GoogleDeveloperLanguageModel.Profile(thinking_token_budget=1024),
"thorough": GoogleDeveloperLanguageModel.Profile(thinking_token_budget=8192)
},
default_profile="fast"
)
Classes:
-
Profile
–Profile configurations for Google Developer models.
Profile
Bases: BaseModel
Profile configurations for Google Developer models.
This class defines profile configurations for Google Gemini models, allowing different thinking/reasoning settings to be applied to the same model.
Attributes:
-
thinking_token_budget
(Optional[int]
) –If configuring a reasoning model, provide a thinking budget in tokens. If not provided, or if set to 0, thinking will be disabled for the profile (not supported on gemini-2.5-pro). To have the model automatically determine a thinking budget based on the complexity of the prompt, set this to -1. Note that Gemini models take this as a suggestion -- and not a hard limit. It is very possible for the model to generate far more thinking tokens than the suggested budget, and for the model to generate reasoning tokens even if thinking is disabled.
Example
Configuring a profile with a fixed thinking budget:
profile = GoogleDeveloperLanguageModel.Profile(thinking_token_budget=4096)
Configuring a profile with automatic thinking budget:
profile = GoogleDeveloperLanguageModel.Profile(thinking_token_budget=-1)
Configuring a profile with thinking disabled:
profile = GoogleDeveloperLanguageModel.Profile(thinking_token_budget=0)
GoogleVertexEmbeddingModel
Bases: BaseModel
Configuration for Google Vertex AI embedding models.
This class defines the configuration settings for Google embedding models available in Google Vertex AI, including model selection and rate limiting parameters. These models are accessible using Google Cloud credentials.
Attributes:
-
model_name
(GoogleVertexEmbeddingModelName
) –The name of the Google Vertex embedding model to use.
-
rpm
(int
) –Requests per minute limit; must be greater than 0.
-
tpm
(int
) –Tokens per minute limit; must be greater than 0.
-
profiles
(Optional[dict[str, Profile]]
) –Optional mapping of profile names to profile configurations.
-
default_profile
(Optional[str]
) –The name of the default profile to use if profiles are configured.
Example
Configuring a Google Vertex embedding model with rate limits:
embedding_model = GoogleVertexEmbeddingModel(
model_name="gemini-embedding-001",
rpm=100,
tpm=1000
)
Configuring a Google Vertex embedding model with profiles:
embedding_model = GoogleVertexEmbeddingModel(
model_name="gemini-embedding-001",
rpm=100,
tpm=1000,
profiles={
"default": GoogleVertexEmbeddingModel.Profile(),
"high_dim": GoogleVertexEmbeddingModel.Profile(output_dimensionality=3072)
},
default_profile="default"
)
Classes:
-
Profile
–Profile configurations for Google Vertex embedding models.
Profile
Bases: BaseModel
Profile configurations for Google Vertex embedding models.
This class defines profile configurations for Google embedding models, allowing different output dimensionality and task type settings to be applied to the same model.
Attributes:
-
output_dimensionality
(Optional[int]
) –The dimensionality of the embedding created by this model. If not provided, the model will use its default dimensionality.
-
task_type
(GoogleEmbeddingTaskType
) –The type of task for the embedding model.
Example
Configuring a profile with custom dimensionality:
profile = GoogleVertexEmbeddingModel.Profile(output_dimensionality=3072)
Configuring a profile with default settings:
profile = GoogleVertexEmbeddingModel.Profile()
GoogleVertexLanguageModel
Bases: BaseModel
Configuration for Google Vertex AI models.
This class defines the configuration settings for Google Gemini models available in Google Vertex AI, including model selection and rate limiting parameters. These models are accessible using Google Cloud credentials.
Attributes:
-
model_name
(GoogleVertexLanguageModelName
) –The name of the Google Vertex model to use.
-
rpm
(int
) –Requests per minute limit; must be greater than 0.
-
tpm
(int
) –Tokens per minute limit; must be greater than 0.
-
profiles
(Optional[dict[str, Profile]]
) –Optional mapping of profile names to profile configurations.
-
default_profile
(Optional[str]
) –The name of the default profile to use if profiles are configured.
Example
Configuring a Google Vertex model with rate limits:
config = GoogleVertexLanguageModel(
model_name="gemini-2.0-flash",
rpm=100,
tpm=1000
)
Configuring a reasoning Google Vertex model with profiles:
config = GoogleVertexLanguageModel(
model_name="gemini-2.5-flash",
rpm=100,
tpm=1000,
profiles={
"thinking_disabled": GoogleVertexLanguageModel.Profile(),
"fast": GoogleVertexLanguageModel.Profile(thinking_token_budget=1024),
"thorough": GoogleVertexLanguageModel.Profile(thinking_token_budget=8192)
},
default_profile="fast"
)
Classes:
-
Profile
–Profile configurations for Google Vertex models.
Profile
Bases: BaseModel
Profile configurations for Google Vertex models.
This class defines profile configurations for Google Gemini models, allowing different thinking/reasoning settings to be applied to the same underlying model.
Attributes:
-
thinking_token_budget
(Optional[int]
) –If configuring a reasoning model, provide a thinking budget in tokens. If not provided, or if set to 0, thinking will be disabled for the profile (not supported on gemini-2.5-pro). To have the model automatically determine a thinking budget based on the complexity of the prompt, set this to -1. Note that Gemini models take this as a suggestion -- and not a hard limit. It is very possible for the model to generate far more thinking tokens than the suggested budget, and for the model to generate reasoning tokens even if thinking is disabled.
Example
Configuring a profile with a fixed thinking budget:
profile = GoogleVertexLanguageModel.Profile(thinking_token_budget=4096)
Configuring a profile with automatic thinking budget:
profile = GoogleVertexLanguageModel.Profile(thinking_token_budget=-1)
Configuring a profile with thinking disabled:
profile = GoogleVertexLanguageModel.Profile(thinking_token_budget=0)
GroupedData
GroupedData(df: DataFrame, by: Optional[List[ColumnOrName]] = None)
Bases: BaseGroupedData
Methods for aggregations on a grouped DataFrame.
Initialize grouped data.
Parameters:
-
df
(DataFrame
) –The DataFrame to group.
-
by
(Optional[List[ColumnOrName]]
, default:None
) –Optional list of columns to group by.
Methods:
-
agg
–Compute aggregations on grouped data and return the result as a DataFrame.
Source code in src/fenic/api/dataframe/grouped_data.py
agg
agg(*exprs: Union[Column, Dict[str, str]]) -> DataFrame
Compute aggregations on grouped data and return the result as a DataFrame.
This method applies aggregate functions to the grouped data.
Parameters:
-
*exprs
(Union[Column, Dict[str, str]]
, default:()
) –Aggregation expressions. Can be:
- Column expressions with aggregate functions (e.g., count("*"), sum("amount"))
- A dictionary mapping column names to aggregate function names (e.g., {"amount": "sum", "age": "avg"})
Returns:
-
DataFrame
(DataFrame
) –A new DataFrame with one row per group and columns for group keys and aggregated values
Raises:
-
ValueError
–If arguments are not Column expressions or a dictionary
-
ValueError
–If dictionary values are not valid aggregate function names
Count employees by department
# Group by department and count employees
df.group_by("department").agg(count("*").alias("employee_count"))
Multiple aggregations
# Multiple aggregations
df.group_by("department").agg(
count("*").alias("employee_count"),
avg("salary").alias("avg_salary"),
max("age").alias("max_age")
)
Dictionary style aggregations
# Dictionary style for simple aggregations
df.group_by("department", "location").agg({"salary": "avg", "age": "max"})
Source code in src/fenic/api/dataframe/grouped_data.py
Lineage
Lineage(lineage: BaseLineage)
Query interface for tracing data lineage through a query plan.
This class allows you to navigate through the query plan both forwards and backwards, tracing how specific rows are transformed through each operation.
Example
# Create a lineage query starting from the root
query = LineageQuery(lineage, session.execution)
# Or start from a specific source
query.start_from_source("my_table")
# Trace rows backwards through a transformation
result = query.backwards(["uuid1", "uuid2"])
# Trace rows forward to see their outputs
result = query.forwards(["uuid3", "uuid4"])
Initialize a Lineage instance.
Parameters:
-
lineage
(BaseLineage
) –The underlying lineage implementation.
Methods:
-
backwards
–Trace rows backwards to see which input rows produced them.
-
forwards
–Trace rows forward to see how they are transformed by the next operation.
-
get_result_df
–Get the result of the query as a Polars DataFrame.
-
get_source_df
–Get a query source by name as a Polars DataFrame.
-
get_source_names
–Get the names of all sources in the query plan. Used to determine where to start the lineage traversal.
-
show
–Print the operator tree of the query.
-
skip_backwards
–[Not Implemented] Trace rows backwards through multiple operations at once.
-
skip_forwards
–[Not Implemented] Trace rows forward through multiple operations at once.
-
start_from_source
–Set the current position to a specific source in the query plan.
Source code in src/fenic/api/lineage.py
backwards
backwards(ids: List[str], branch_side: Optional[BranchSide] = None) -> pl.DataFrame
Trace rows backwards to see which input rows produced them.
Parameters:
-
ids
(List[str]
) –List of UUIDs identifying the rows to trace back
-
branch_side
(Optional[BranchSide]
, default:None
) –For operators with multiple inputs (like joins), specify which input to trace ("left" or "right"). Not needed for single-input operations.
Returns:
-
DataFrame
–DataFrame containing the source rows that produced the specified outputs
Raises:
-
ValueError
–If invalid ids format or incorrect branch_side specification
Example
# Simple backwards trace
source_rows = query.backwards(["result_uuid1"])
# Trace back through a join
left_rows = query.backwards(["join_uuid1"], branch_side="left")
right_rows = query.backwards(["join_uuid1"], branch_side="right")
Source code in src/fenic/api/lineage.py
forwards
forwards(row_ids: List[str]) -> pl.DataFrame
Trace rows forward to see how they are transformed by the next operation.
Parameters:
-
row_ids
(List[str]
) –List of UUIDs identifying the rows to trace
Returns:
-
DataFrame
–DataFrame containing the transformed rows in the next operation
Raises:
-
ValueError
–If at root node or if row_ids format is invalid
Example
# Trace how specific customer rows are transformed
transformed = query.forwards(["customer_uuid1", "customer_uuid2"])
Source code in src/fenic/api/lineage.py
get_result_df
get_result_df() -> pl.DataFrame
Get the result of the query as a Polars DataFrame.
Source code in src/fenic/api/lineage.py
get_source_df
get_source_df(source_name: str) -> pl.DataFrame
Get a query source by name as a Polars DataFrame.
Source code in src/fenic/api/lineage.py
get_source_names
get_source_names() -> List[str]
Get the names of all sources in the query plan. Used to determine where to start the lineage traversal.
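Example
A minimal sketch (assuming a lineage created via df.lineage() over a query that reads at least one source table):
names = lineage.get_source_names()
# e.g. ["customers"]; pick a source to begin traversal
lineage.start_from_source(names[0])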
Source code in src/fenic/api/lineage.py
show
show() -> None
Print the operator tree of the query.
Source code in src/fenic/api/lineage.py
skip_backwards
skip_backwards(ids: List[str]) -> Dict[str, pl.DataFrame]
[Not Implemented] Trace rows backwards through multiple operations at once.
This method will allow efficient tracing through multiple operations without intermediate results.
Parameters:
-
ids
(List[str]
) –List of UUIDs identifying the rows to trace back
Returns:
-
Dict[str, DataFrame]
–Dictionary mapping operation names to their source DataFrames
Raises:
-
NotImplementedError
–This method is not yet implemented
Source code in src/fenic/api/lineage.py
skip_forwards
skip_forwards(row_ids: List[str]) -> pl.DataFrame
[Not Implemented] Trace rows forward through multiple operations at once.
This method will allow efficient tracing through multiple operations without intermediate results.
Parameters:
-
row_ids
(List[str]
) –List of UUIDs identifying the rows to trace
Returns:
-
DataFrame
–DataFrame containing the final transformed rows
Raises:
-
NotImplementedError
–This method is not yet implemented
Source code in src/fenic/api/lineage.py
start_from_source
start_from_source(source_name: str) -> None
Set the current position to a specific source in the query plan.
Parameters:
-
source_name
(str
) –Name of the source table to start from
Example
query.start_from_source("customers")
# Now you can trace forward from the customers table
Source code in src/fenic/api/lineage.py
OpenAIEmbeddingModel
Bases: BaseModel
Configuration for OpenAI embedding models.
This class defines the configuration settings for OpenAI embedding models, including model selection and rate limiting parameters.
Attributes:
-
model_name
(OpenAIEmbeddingModelName
) –The name of the OpenAI embedding model to use.
-
rpm
(int
) –Requests per minute limit; must be greater than 0.
-
tpm
(int
) –Tokens per minute limit; must be greater than 0.
Example
Configuring an OpenAI embedding model with rate limits:
config = OpenAIEmbeddingModel(
model_name="text-embedding-3-small",
rpm=100,
tpm=100
)
OpenAILanguageModel
Bases: BaseModel
Configuration for OpenAI language models.
This class defines the configuration settings for OpenAI language models, including model selection and rate limiting parameters.
Attributes:
-
model_name
(OpenAILanguageModelName
) –The name of the OpenAI model to use.
-
rpm
(int
) –Requests per minute limit; must be greater than 0.
-
tpm
(int
) –Tokens per minute limit; must be greater than 0.
-
profiles
(Optional[dict[str, Profile]]
) –Optional mapping of profile names to profile configurations.
-
default_profile
(Optional[str]
) –The name of the default profile to use if profiles are configured.
Example
Configuring an OpenAI language model with rate limits:
config = OpenAILanguageModel(
model_name="gpt-4.1-nano",
rpm=100,
tpm=100
)
Configuring an OpenAI model with profiles:
config = OpenAILanguageModel(
model_name="o4-mini",
rpm=100,
tpm=100,
profiles={
"fast": OpenAILanguageModel.Profile(reasoning_effort="low"),
"thorough": OpenAILanguageModel.Profile(reasoning_effort="high")
},
default_profile="fast"
)
Using a profile in a semantic operation:
config = SemanticConfig(
language_models={
"o4": OpenAILanguageModel(
model_name="o4-mini",
rpm=1_000,
tpm=1_000_000,
profiles={
"fast": OpenAILanguageModel.Profile(reasoning_effort="low"),
"thorough": OpenAILanguageModel.Profile(reasoning_effort="high")
},
default_profile="fast"
)
},
default_language_model="o4"
)
# Will use the default "fast" profile for the "o4" model
semantic.map(instruction="Construct a formal proof of the {hypothesis}.", model_alias="o4")
# Will use the "thorough" profile for the "o4" model
semantic.map(instruction="Construct a formal proof of the {hypothesis}.", model_alias=ModelAlias(name="o4", profile="thorough"))
Classes:
-
Profile
–OpenAI-specific profile configurations.
Profile
Bases: BaseModel
OpenAI-specific profile configurations.
This class defines profile configurations for OpenAI models, allowing a user to reference the same underlying model in semantic operations with different settings. For now, only the reasoning effort can be customized.
Attributes:
-
reasoning_effort
(Optional[ReasoningEffort]
) –If configuring a reasoning model, provide a reasoning effort. OpenAI has separate o-series reasoning models, for which thinking cannot be disabled. If an o-series model is specified, but no
reasoning_effort
is provided, thereasoning_effort
will be set tolow
.
Note
When using an o-series reasoning model, the temperature
cannot be customized -- any changes to temperature
will be ignored.
Example
Configuring a profile with medium reasoning effort:
profile = OpenAILanguageModel.Profile(reasoning_effort="medium")
SemanticConfig
Bases: BaseModel
Configuration for semantic language and embedding models.
This class defines the configuration for both language models and optional embedding models used in semantic operations. It ensures that all configured models are valid and supported by their respective providers.
Attributes:
-
language_models
(Optional[dict[str, LanguageModel]]
) –Mapping of model aliases to language model configurations.
-
default_language_model
(Optional[str]
) –The alias of the default language model to use for semantic operations. Not required if only one language model is configured.
-
embedding_models
(Optional[dict[str, EmbeddingModel]]
) –Optional mapping of model aliases to embedding model configurations.
-
default_embedding_model
(Optional[str]
) –The alias of the default embedding model to use for semantic operations.
Note
The embedding model is optional and only required for operations that need semantic search or embedding capabilities.
Example
Configuring semantic models with a single language model:
config = SemanticConfig(
language_models={
"gpt4": OpenAILanguageModel(
model_name="gpt-4.1-nano",
rpm=100,
tpm=100
)
}
)
Configuring semantic models with multiple language models and an embedding model:
config = SemanticConfig(
language_models={
"gpt4": OpenAILanguageModel(
model_name="gpt-4.1-nano",
rpm=100,
tpm=100
),
"claude": AnthropicLanguageModel(
model_name="claude-3-5-haiku-latest",
rpm=100,
input_tpm=100,
output_tpm=100
),
"gemini": GoogleDeveloperLanguageModel(
model_name="gemini-2.0-flash",
rpm=100,
tpm=1000
)
},
default_language_model="gpt4",
embedding_models={
"openai_embeddings": OpenAIEmbeddingModel(
model_name="text-embedding-3-small",
rpm=100,
tpm=100
)
},
default_embedding_model="openai_embeddings"
)
Configuring models with profiles:
config = SemanticConfig(
language_models={
"gpt4": OpenAILanguageModel(
model_name="gpt-4o-mini",
rpm=100,
tpm=100,
profiles={
"fast": OpenAILanguageModel.Profile(reasoning_effort="low"),
"thorough": OpenAILanguageModel.Profile(reasoning_effort="high")
},
default_profile="fast"
),
"claude": AnthropicLanguageModel(
model_name="claude-3-5-haiku-latest",
rpm=100,
input_tpm=100,
output_tpm=100,
profiles={
"fast": AnthropicLanguageModel.Profile(thinking_token_budget=1024),
"thorough": AnthropicLanguageModel.Profile(thinking_token_budget=4096)
},
default_profile="fast"
)
},
default_language_model="gpt4"
)
Methods:
-
model_post_init
–Post initialization hook to set defaults.
-
validate_models
–Validates that the selected models are supported by the system.
model_post_init
model_post_init(__context) -> None
Post initialization hook to set defaults.
This hook runs after the model is initialized and validated. It sets the default language and embedding models if they are not set and there is only one model available.
Source code in src/fenic/api/session/config.py
validate_models
validate_models() -> SemanticConfig
Validates that the selected models are supported by the system.
This validator checks that both the language model and embedding model (if provided) are valid and supported by their respective providers.
Returns:
-
SemanticConfig
–The validated SemanticConfig instance.
Raises:
-
ConfigurationError
–If any of the models are not supported.
Source code in src/fenic/api/session/config.py
SemanticExtensions
SemanticExtensions(df: DataFrame)
A namespace for semantic dataframe operators.
Initialize semantic extensions.
Parameters:
-
df
(DataFrame
) –The DataFrame to extend with semantic operations.
Methods:
-
join
–Performs a semantic join between two DataFrames using a natural language predicate.
-
sim_join
–Performs a semantic similarity join between two DataFrames using embedding expressions.
-
with_cluster_labels
–Cluster rows using K-means and add cluster metadata columns.
Source code in src/fenic/api/dataframe/semantic_extensions.py
join
join(other: DataFrame, predicate: str, left_on: Column, right_on: Column, strict: bool = True, examples: Optional[JoinExampleCollection] = None, model_alias: Optional[Union[str, ModelAlias]] = None) -> DataFrame
Performs a semantic join between two DataFrames using a natural language predicate.
This method evaluates a boolean predicate for each potential row pair between the two DataFrames, including only those pairs where the predicate evaluates to True.
The join process:
1. For each row in the left DataFrame, evaluates the predicate in the Jinja template against each row in the right DataFrame
2. Includes row pairs where the predicate returns True
3. Excludes row pairs where the predicate returns False
4. Returns a new DataFrame containing all columns from both DataFrames for the matched pairs
The Jinja template must use exactly two column placeholders:
- One from the left DataFrame: {{ left_on }}
- One from the right DataFrame: {{ right_on }}
Parameters:
-
other
(DataFrame
) –The DataFrame to join with.
-
predicate
(str
) –A Jinja2 template containing the natural language predicate. Must include placeholders for exactly one column from each DataFrame. The template is evaluated as a boolean - True includes the pair, False excludes it.
-
left_on
(Column
) –The column from the left DataFrame (self) to use in the join predicate.
-
right_on
(Column
) –The column from the right DataFrame (other) to use in the join predicate.
-
strict
(bool
, default:True
) –If True, when either the left_on or right_on column has a None value for a row pair, that pair is automatically excluded from the join (predicate is not evaluated). If False, None values are rendered according to Jinja2's null rendering behavior. Default is True.
-
examples
(Optional[JoinExampleCollection]
, default:None
) –Optional JoinExampleCollection containing labeled examples to guide the join. Each example should have:
- left: Sample value from the left column
- right: Sample value from the right column
- output: Boolean indicating whether this pair should be joined (True) or not (False)
-
model_alias
(Optional[Union[str, ModelAlias]]
, default:None
) –Optional alias for the language model to use. If None, uses the default model.
Returns:
-
DataFrame
(DataFrame
) –A new DataFrame containing matched row pairs with all columns from both DataFrames.
Basic semantic join
# Match job listings with candidate resumes based on title/skills
# Only includes pairs where the predicate evaluates to True
df_jobs.semantic.join(df_resumes,
predicate=dedent(''' Job Description: {{left_on}}
Candidate Background: {{right_on}}
The candidate is qualified for the job.'''),
left_on=col("job_description"),
right_on=col("work_experience"),
examples=examples
)
Semantic join with examples
# Improve join quality with examples
examples = JoinExampleCollection()
examples.create_example(JoinExample(
left="5 years experience building backend services in Python using asyncio, FastAPI, and PostgreSQL",
right="Senior Software Engineer - Backend",
output=True)) # This pair WILL be included in similar cases
examples.create_example(JoinExample(
left="5 years experience with growth strategy, private equity due diligence, and M&A",
right="Product Manager - Hardware",
output=False)) # This pair will NOT be included in similar cases
df_jobs.semantic.join(
other=df_resumes,
predicate=dedent(''' Job Description: {{left_on}}
Candidate Background: {{right_on}}
The candidate is qualified for the job.'''),
left_on=col("job_description"),
right_on=col("work_experience"),
examples=examples
)
Source code in src/fenic/api/dataframe/semantic_extensions.py
sim_join
sim_join(other: DataFrame, left_on: ColumnOrName, right_on: ColumnOrName, k: int = 1, similarity_metric: SemanticSimilarityMetric = 'cosine', similarity_score_column: Optional[str] = None) -> DataFrame
Performs a semantic similarity join between two DataFrames using embedding expressions.
For each row in the left DataFrame, returns the top k most semantically similar rows from the right DataFrame based on the specified similarity metric.
Parameters:
-
other
(DataFrame
) –The right-hand DataFrame to join with.
-
left_on
(ColumnOrName
) –Expression or column representing embeddings in the left DataFrame.
-
right_on
(ColumnOrName
) –Expression or column representing embeddings in the right DataFrame.
-
k
(int
, default:1
) –Number of most similar matches to return per row.
-
similarity_metric
(SemanticSimilarityMetric
, default:'cosine'
) –Similarity metric to use: "l2", "cosine", or "dot".
-
similarity_score_column
(Optional[str]
, default:None
) –If set, adds a column with this name containing similarity scores. If None, the scores are omitted.
Returns:
-
DataFrame
–A DataFrame containing one row for each of the top-k matches per row in the left DataFrame. The result includes all columns from both DataFrames, optionally augmented with a similarity score column if similarity_score_column is provided.
Raises:
-
ValidationError
–If
k
is not positive or if the columns are invalid.
-
ValidationError
–If
similarity_metric
is not one of "l2", "cosine", or "dot".
Match queries to FAQ entries
# Match customer queries to FAQ entries
df_queries.semantic.sim_join(
df_faqs,
left_on=embeddings(col("query_text")),
right_on=embeddings(col("faq_question")),
k=1
)
Link headlines to articles
# Link news headlines to full articles
df_headlines.semantic.sim_join(
df_articles,
left_on=embeddings(col("headline")),
right_on=embeddings(col("content")),
k=3,
similarity_score_column="similarity_score"
)
Find similar job postings
# Find similar job postings across two sources
df_linkedin.semantic.sim_join(
df_indeed,
left_on=embeddings(col("job_title")),
right_on=embeddings(col("job_description")),
k=2
)
Source code in src/fenic/api/dataframe/semantic_extensions.py
with_cluster_labels
with_cluster_labels(by: ColumnOrName, num_clusters: int, max_iter: int = 300, num_init: int = 1, label_column: str = 'cluster_label', centroid_column: Optional[str] = None) -> DataFrame
Cluster rows using K-means and add cluster metadata columns.
This method clusters rows based on the given embedding column or expression using K-means. It adds a new column with cluster assignments, and optionally includes the centroid embedding for each assigned cluster.
Parameters:
-
by
(ColumnOrName
) –Column or expression producing embeddings to cluster (e.g.,
embed(col("text"))
).
-
num_clusters
(int
) –Number of clusters to compute (must be > 0).
-
max_iter
(int
, default:300
) –Maximum iterations for a single run of the k-means algorithm. The algorithm stops when it either converges or reaches this limit.
-
num_init
(int
, default:1
) –Number of independent runs of k-means with different centroid seeds. The best result is selected.
-
label_column
(str
, default:'cluster_label'
) –Name of the output column for cluster IDs. Default is "cluster_label".
-
centroid_column
(Optional[str]
, default:None
) –If provided, adds a column with this name containing the centroid embedding for each row's assigned cluster.
Returns:
-
DataFrame
–A DataFrame with all original columns plus:
- <label_column>: integer cluster assignment (0 to num_clusters - 1)
- <centroid_column>: cluster centroid embedding, if specified
Basic clustering
# Cluster customer feedback and add cluster metadata
clustered_df = df.semantic.with_cluster_labels("feedback_embeddings", num_clusters=5)
# Then use regular operations to analyze clusters
clustered_df.group_by("cluster_label").agg(count("*"), avg("rating"))
Filter outliers using centroids
# Cluster and filter out rows far from their centroid
clustered_df = df.semantic.with_cluster_labels("embeddings", num_clusters=3, num_init=10, centroid_column="cluster_centroid")
clean_df = clustered_df.filter(
embedding.compute_similarity("embeddings", "cluster_centroid", metric="cosine") > 0.7
)
Source code in src/fenic/api/dataframe/semantic_extensions.py
Session
The entry point to programming with the DataFrame API. Similar to PySpark's SparkSession.
Create a session with default configuration
session = Session.get_or_create(SessionConfig(app_name="my_app"))
Create a session with cloud configuration
config = SessionConfig(
app_name="my_app",
cloud=CloudConfig(size=CloudExecutorSize.MEDIUM)
)
session = Session.get_or_create(config)
Methods:
-
create_dataframe
–Create a DataFrame from a variety of Python-native data formats.
-
get_or_create
–Gets an existing Session or creates a new one with the configured settings.
-
sql
–Execute a read-only SQL query against one or more DataFrames using named placeholders.
-
stop
–Stops the session and closes all connections.
-
table
–Returns the specified table as a DataFrame.
-
view
–Returns the specified view as a DataFrame.
Attributes:
-
catalog
(Catalog
) –Interface for catalog operations on the Session.
-
read
(DataFrameReader
) –Returns a DataFrameReader that can be used to read data in as a DataFrame.
catalog
property
catalog: Catalog
Interface for catalog operations on the Session.
read
property
read: DataFrameReader
Returns a DataFrameReader that can be used to read data in as a DataFrame.
Returns:
-
DataFrameReader
(DataFrameReader
) –A reader interface to read data into DataFrame
Raises:
-
RuntimeError
–If the session has been stopped
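Read a file into a DataFrame (a minimal sketch, not from the source; assumes the reader exposes a csv() method and that "data.csv" exists):
# Hypothetical file path; see DataFrameReader for the supported formats
df = session.read.csv("data.csv")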
create_dataframe
create_dataframe(data: DataLike) -> DataFrame
Create a DataFrame from a variety of Python-native data formats.
Parameters:
-
data
(DataLike
) –Input data. Must be one of:
- Polars DataFrame
- Pandas DataFrame
- dict of column_name -> list of values
- list of dicts (each dict representing a row)
- pyarrow Table
Returns:
-
DataFrame
–A new DataFrame instance
Raises:
-
ValueError
–If the input format is unsupported or inconsistent with provided column names.
Create from Polars DataFrame
import polars as pl
df = pl.DataFrame({"col1": [1, 2], "col2": ["a", "b"]})
session.create_dataframe(df)
Create from Pandas DataFrame
import pandas as pd
df = pd.DataFrame({"col1": [1, 2], "col2": ["a", "b"]})
session.create_dataframe(df)
Create from dictionary
session.create_dataframe({"col1": [1, 2], "col2": ["a", "b"]})
Create from list of dictionaries
session.create_dataframe([
{"col1": 1, "col2": "a"},
{"col1": 2, "col2": "b"}
])
Create from pyarrow Table
import pyarrow as pa
table = pa.Table.from_pydict({"col1": [1, 2], "col2": ["a", "b"]})
session.create_dataframe(table)
Source code in src/fenic/api/session/session.py
get_or_create
classmethod
get_or_create(config: SessionConfig) -> Session
Gets an existing Session or creates a new one with the configured settings.
Returns:
-
Session
–A Session instance configured with the provided settings
Source code in src/fenic/api/session/session.py
sql
sql(query: str, /, **tables: DataFrame) -> DataFrame
Execute a read-only SQL query against one or more DataFrames using named placeholders.
This allows you to execute ad hoc SQL queries using familiar syntax when it's more convenient than the DataFrame API.
Placeholders in the SQL string (e.g. {df}) should correspond to keyword arguments (e.g. df=my_dataframe).
For supported SQL syntax and functions, refer to the DuckDB SQL documentation: https://duckdb.org/docs/sql/introduction.
Parameters:
-
query
(str
) –A SQL query string with placeholders like
{df}
-
**tables
(DataFrame
, default:{}
) –Keyword arguments mapping placeholder names to DataFrames
Returns:
-
DataFrame
–A lazy DataFrame representing the result of the SQL query
Raises:
-
ValidationError
–If a placeholder is used in the query but not passed as a keyword argument
Simple join between two DataFrames
df1 = session.create_dataframe({"id": [1, 2]})
df2 = session.create_dataframe({"id": [2, 3]})
result = session.sql(
"SELECT * FROM {df1} JOIN {df2} USING (id)",
df1=df1,
df2=df2
)
Complex query with multiple DataFrames
users = session.create_dataframe({"user_id": [1, 2], "name": ["Alice", "Bob"]})
orders = session.create_dataframe({"order_id": [1, 2], "user_id": [1, 2]})
products = session.create_dataframe({"product_id": [1, 2], "name": ["Widget", "Gadget"]})
result = session.sql("""
SELECT u.name, p.name as product
FROM {users} u
JOIN {orders} o ON u.user_id = o.user_id
JOIN {products} p ON o.product_id = p.product_id
""", users=users, orders=orders, products=products)
Source code in src/fenic/api/session/session.py
stop
stop()
Stops the session and closes all connections.
Source code in src/fenic/api/session/session.py
table
table(table_name: str) -> DataFrame
Returns the specified table as a DataFrame.
Parameters:
-
table_name
(str
) –Name of the table
Returns:
-
DataFrame
–Table as a DataFrame
Raises:
-
ValueError
–If the table does not exist
Load an existing table
df = session.table("my_table")
Source code in src/fenic/api/session/session.py
view
view(view_name: str) -> DataFrame
Returns the specified view as a DataFrame.
Parameters:
-
view_name
(str
) –Name of the view
Returns:
-
DataFrame
–DataFrame containing the data from the given view
Source code in src/fenic/api/session/session.py
SessionConfig
Bases: BaseModel
Configuration for a user session.
This class defines the complete configuration for a user session, including application settings, model configurations, and optional cloud settings. It serves as the central configuration object for all language model operations.
Attributes:
-
app_name
(str
) –Name of the application using this session. Defaults to "default_app".
-
db_path
(Optional[Path]
) –Optional path to a local database file for persistent storage.
-
semantic
(Optional[SemanticConfig]
) –Configuration for semantic models (optional).
-
cloud
(Optional[CloudConfig]
) –Optional configuration for cloud execution.
Note
The semantic configuration is optional. When not provided, only non-semantic operations are available. The cloud configuration is optional and only needed for distributed processing.
Example
Configuring a basic session with a single language model:
config = SessionConfig(
app_name="my_app",
semantic=SemanticConfig(
language_models={
"gpt4": OpenAILanguageModel(
model_name="gpt-4.1-nano",
rpm=100,
tpm=100
)
}
)
)
Configuring a session with multiple models and cloud execution:
config = SessionConfig(
app_name="production_app",
db_path=Path("/path/to/database.db"),
semantic=SemanticConfig(
language_models={
"gpt4": OpenAILanguageModel(
model_name="gpt-4.1-nano",
rpm=100,
tpm=100
),
"claude": AnthropicLanguageModel(
model_name="claude-3-5-haiku-latest",
rpm=100,
input_tpm=100,
output_tpm=100
)
},
default_language_model="gpt4",
embedding_models={
"openai_embeddings": OpenAIEmbeddingModel(
model_name="text-embedding-3-small",
rpm=100,
tpm=100
)
},
default_embedding_model="openai_embeddings"
),
cloud=CloudConfig(size=CloudExecutorSize.MEDIUM)
)
array
array(*args: Union[ColumnOrName, List[ColumnOrName], Tuple[ColumnOrName, ...]]) -> Column
Creates a new array column from multiple input columns.
Parameters:
-
*args
(Union[ColumnOrName, List[ColumnOrName], Tuple[ColumnOrName, ...]]
, default:()
) –Columns or column names to combine into an array. Can be:
- Individual arguments
- Lists of columns/column names
- Tuples of columns/column names
Returns:
-
Column
–A Column expression representing an array containing values from the input columns
Raises:
-
TypeError
–If any argument is not a Column, string, or collection of Columns/strings
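Combine columns into an array (a minimal sketch, not from the source; assumes a DataFrame df with string columns "tag1" and "tag2"):
# Individual arguments
df.select(array("tag1", "tag2"))
# Equivalent list form
df.select(array([col("tag1"), col("tag2")]))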
Source code in src/fenic/api/functions/builtin.py
array_agg
array_agg(column: ColumnOrName) -> Column
Alias for collect_list().
Source code in src/fenic/api/functions/builtin.py
array_contains
array_contains(column: ColumnOrName, value: Union[str, int, float, bool, Column]) -> Column
Checks if array column contains a specific value.
This function returns True if the array in the specified column contains the given value, and False otherwise. Returns False if the array is None.
Parameters:
-
column
(ColumnOrName
) –Column or column name containing the arrays to check.
-
value
(Union[str, int, float, bool, Column]
) –Value to search for in the arrays. Can be:
- A literal value (string, number, boolean)
- A Column expression
Returns:
-
Column
–A boolean Column expression (True if value is found, False otherwise).
Raises:
-
TypeError
–If value type is incompatible with the array element type.
-
TypeError
–If the column does not contain array data.
Check for values in arrays
# Check if 'python' exists in arrays in the 'tags' column
df.select(array_contains("tags", "python"))
# Check using a value from another column
df.select(array_contains("tags", col("search_term")))
Source code in src/fenic/api/functions/builtin.py
array_size
array_size(column: ColumnOrName) -> Column
Returns the number of elements in an array column.
This function computes the length of arrays stored in the specified column. Returns None if the array itself is None.
Parameters:
-
column
(ColumnOrName
) –Column or column name containing arrays whose length to compute.
Returns:
-
Column
–A Column expression representing the array length.
Raises:
-
TypeError
–If the column does not contain array data.
Get array sizes
# Get the size of arrays in 'tags' column
df.select(array_size("tags"))
# Use with column reference
df.select(array_size(col("tags")))
Source code in src/fenic/api/functions/builtin.py
asc
asc(column: ColumnOrName) -> Column
Mark this column for ascending sort order with nulls first.
Parameters:
-
column
(ColumnOrName
) –The column to apply the ascending ordering to.
Returns:
-
Column
–A sort expression with ascending order and nulls first.
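Sort in ascending order (a minimal sketch, not from the source; assumes a DataFrame df with an "age" column and a PySpark-style sort method):
# Nulls appear first under asc()
df.sort(asc("age"))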
Source code in src/fenic/api/functions/builtin.py
asc_nulls_first
asc_nulls_first(column: ColumnOrName) -> Column
Alias for asc().
Parameters:
-
column
(ColumnOrName
) –The column to apply the ascending ordering to.
Returns:
-
Column
–A sort expression with ascending order and nulls first.
Source code in src/fenic/api/functions/builtin.py
asc_nulls_last
asc_nulls_last(column: ColumnOrName) -> Column
Mark this column for ascending sort order with nulls last.
Parameters:
-
column
(ColumnOrName
) –The column to apply the ascending ordering to.
Returns:
-
Column
–A Column expression representing the column and the ascending sort order with nulls last.
Source code in src/fenic/api/functions/builtin.py
avg
avg(column: ColumnOrName) -> Column
Aggregate function: returns the average (mean) of all values in the specified column. Applies to numeric and embedding types.
Parameters:
-
column
(ColumnOrName
) –Column or column name to compute the average of
Returns:
-
Column
–A Column expression representing the average aggregation
Raises:
-
TypeError
–If column is not a Column or string
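avg usage (a minimal sketch, not from the source; assumes a DataFrame df with "category" and "rating" columns):
# Average rating per category
df.group_by("category").agg(avg("rating"))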
Source code in src/fenic/api/functions/builtin.py
coalesce
coalesce(*cols: ColumnOrName) -> Column
Returns the first non-null value from the given columns for each row.
This function mimics the behavior of SQL's COALESCE function. It evaluates the input columns in order and returns the first non-null value encountered. If all values are null, returns null.
Parameters:
-
*cols
(ColumnOrName
, default:()
) –Column expressions or column names to evaluate. Each argument should be a single column expression or column name string.
Returns:
-
Column
–A Column expression containing the first non-null value from the input columns.
Raises:
-
ValidationError
–If no columns are provided.
coalesce usage
df.select(coalesce("col1", "col2", "col3"))
Source code in src/fenic/api/functions/builtin.py
col
col(col_name: str) -> Column
Creates a Column expression referencing a column in the DataFrame.
Parameters:
-
col_name
(str
) –Name of the column to reference
Returns:
-
Column
–A Column expression for the specified column
Raises:
-
TypeError
–If col_name is not a string
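col usage (a minimal sketch, not from the source; assumes a DataFrame df with an "age" column):
# Reference a column in a select
df.select(col("age"))
# Use in a conditional expression
df.select(when(col("age") > 18, lit("adult")))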
Source code in src/fenic/api/functions/core.py
collect_list
collect_list(column: ColumnOrName) -> Column
Aggregate function: collects all values from the specified column into a list.
Parameters:
-
column
(ColumnOrName
) –Column or column name to collect values from
Returns:
-
Column
–A Column expression representing the list aggregation
Raises:
-
TypeError
–If column is not a Column or string
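collect_list usage (a minimal sketch, not from the source; assumes a DataFrame df with "user_id" and "item" columns):
# Collect each user's items into a list
df.group_by("user_id").agg(collect_list("item"))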
Source code in src/fenic/api/functions/builtin.py
count
count(column: ColumnOrName) -> Column
Aggregate function: returns the count of non-null values in the specified column.
Parameters:
-
column
(ColumnOrName
) –Column or column name to count values in
Returns:
-
Column
–A Column expression representing the count aggregation
Raises:
-
TypeError
–If column is not a Column or string
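count usage (a minimal sketch, not from the source; assumes a DataFrame df with "category" and "order_id" columns):
# Count non-null order IDs per category
df.group_by("category").agg(count("order_id"))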
Source code in src/fenic/api/functions/builtin.py
desc
desc(column: ColumnOrName) -> Column
Mark this column for descending sort order with nulls first.
Parameters:
-
column
(ColumnOrName
) –The column to apply the descending ordering to.
Returns:
-
Column
–A sort expression with descending order and nulls first.
Source code in src/fenic/api/functions/builtin.py
desc_nulls_first
desc_nulls_first(column: ColumnOrName) -> Column
Alias for desc().
Parameters:
-
column
(ColumnOrName
) –The column to apply the descending ordering to.
Returns:
-
Column
–A sort expression with descending order and nulls first.
Source code in src/fenic/api/functions/builtin.py
desc_nulls_last
desc_nulls_last(column: ColumnOrName) -> Column
Mark this column for descending sort order with nulls last.
Parameters:
-
column
(ColumnOrName
) –The column to apply the descending ordering to.
Returns:
-
Column
–A sort expression with descending order and nulls last.
Source code in src/fenic/api/functions/builtin.py
first
first(column: ColumnOrName) -> Column
Aggregate function: returns the first non-null value in the specified column.
Typically used in aggregations to select the first observed value per group.
Parameters:
-
column
(ColumnOrName
) –Column or column name.
Returns:
-
Column
–Column expression for the first value.
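first usage (a minimal sketch, not from the source; assumes a DataFrame df with "user_id" and "email" columns):
# Take the first observed email per user
df.group_by("user_id").agg(first("email"))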
Source code in src/fenic/api/functions/builtin.py
greatest
greatest(*cols: ColumnOrName) -> Column
Returns the greatest value from the given columns for each row.
This function mimics the behavior of SQL's GREATEST function. It evaluates the input columns in order and returns the greatest value encountered. If all values are null, returns null.
All arguments must be of the same primitive type (e.g., StringType, BooleanType, FloatType, IntegerType, etc.).
Parameters:
-
*cols
(ColumnOrName
, default:()
) –Column expressions or column names to evaluate. Each argument should be a single column expression or column name string.
Returns:
-
Column
–A Column expression containing the greatest value from the input columns.
Raises:
-
ValidationError
–If fewer than two columns are provided.
greatest usage
df.select(fc.greatest("col1", "col2", "col3"))
Source code in src/fenic/api/functions/builtin.py
least
least(*cols: ColumnOrName) -> Column
Returns the least value from the given columns for each row.
This function mimics the behavior of SQL's LEAST function. It evaluates the input columns in order and returns the least value encountered. If all values are null, returns null.
All arguments must be of the same primitive type (e.g., StringType, BooleanType, FloatType, IntegerType, etc.).
Parameters:
-
*cols
(ColumnOrName
, default:()
) –Column expressions or column names to evaluate. Each argument should be a single column expression or column name string.
Returns:
-
Column
–A Column expression containing the least value from the input columns.
Raises:
-
ValidationError
–If fewer than two columns are provided.
least usage
df.select(fc.least("col1", "col2", "col3"))
Source code in src/fenic/api/functions/builtin.py
lit
lit(value: Any) -> Column
Creates a Column expression representing a literal value.
Parameters:
-
value
(Any
) –The literal value to create a column for
Returns:
-
Column
–A Column expression representing the literal value
Raises:
-
ValueError
–If the type of the value cannot be inferred
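lit usage (a minimal sketch, not from the source; assumes a DataFrame df with a "price" column and that Column supports arithmetic operators):
# Select a constant value
df.select(lit(1))
# Combine with a column expression
df.select(col("price") * lit(0.9))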
Source code in src/fenic/api/functions/core.py
max
max(column: ColumnOrName) -> Column
Aggregate function: returns the maximum value in the specified column.
Parameters:
-
column
(ColumnOrName
) –Column or column name to compute the maximum of
Returns:
-
Column
–A Column expression representing the maximum aggregation
Raises:
-
TypeError
–If column is not a Column or string
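max usage (a minimal sketch, not from the source; assumes a DataFrame df with "category" and "price" columns):
# Maximum and minimum price per category
df.group_by("category").agg(max("price"), min("price"))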
Source code in src/fenic/api/functions/builtin.py
mean
mean(column: ColumnOrName) -> Column
Aggregate function: returns the mean (average) of all values in the specified column.
Alias for avg().
Parameters:
-
column
(ColumnOrName
) –Column or column name to compute the mean of
Returns:
-
Column
–A Column expression representing the mean aggregation
Raises:
-
TypeError
–If column is not a Column or string
Source code in src/fenic/api/functions/builtin.py
min
min(column: ColumnOrName) -> Column
Aggregate function: returns the minimum value in the specified column.
Parameters:
-
column
(ColumnOrName
) –Column or column name to compute the minimum of
Returns:
-
Column
–A Column expression representing the minimum aggregation
Raises:
-
TypeError
–If column is not a Column or string
Source code in src/fenic/api/functions/builtin.py
stddev
stddev(column: ColumnOrName) -> Column
Aggregate function: returns the sample standard deviation of the specified column.
Parameters:
-
column
(ColumnOrName
) –Column or column name.
Returns:
-
Column
–Column expression for sample standard deviation.
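stddev usage (a minimal sketch, not from the source; assumes a DataFrame df with "category" and "price" columns):
# Sample standard deviation of price per category
df.group_by("category").agg(stddev("price"))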
Source code in src/fenic/api/functions/builtin.py
struct
struct(*args: Union[ColumnOrName, List[ColumnOrName], Tuple[ColumnOrName, ...]]) -> Column
Creates a new struct column from multiple input columns.
Parameters:
-
*args
(Union[ColumnOrName, List[ColumnOrName], Tuple[ColumnOrName, ...]]
, default:()
) –Columns or column names to combine into a struct. Can be:
- Individual arguments
- Lists of columns/column names
- Tuples of columns/column names
Returns:
-
Column
–A Column expression representing a struct containing the input columns
Raises:
-
TypeError
–If any argument is not a Column, string, or collection of Columns/strings
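Create a struct column (a minimal sketch, not from the source; assumes a DataFrame df with "name" and "age" columns):
# Combine fields into a single struct value per row
df.select(struct("name", "age"))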
Source code in src/fenic/api/functions/builtin.py
sum
sum(column: ColumnOrName) -> Column
Aggregate function: returns the sum of all values in the specified column.
Parameters:
-
column
(ColumnOrName
) –Column or column name to compute the sum of
Returns:
-
Column
–A Column expression representing the sum aggregation
Raises:
-
TypeError
–If column is not a Column or string
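sum usage (a minimal sketch, not from the source; assumes a DataFrame df with "region" and "sales" columns):
# Total sales per region
df.group_by("region").agg(sum("sales"))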
Source code in src/fenic/api/functions/builtin.py
udf
udf(f: Optional[Callable] = None, *, return_type: DataType)
A decorator or function for creating user-defined functions (UDFs) that can be applied to DataFrame rows.
When applied, UDFs will:
- Access StructType columns as Python dictionaries (dict[str, Any]).
- Access ArrayType columns as Python lists (list[Any]).
- Access primitive types (e.g., int, float, str) as their respective Python types.
Parameters:
-
f
(Optional[Callable]
, default:None
) –Python function to convert to UDF
-
return_type
(DataType
) –Expected return type of the UDF. Required parameter.
UDF with primitive types
# UDF with primitive types
@udf(return_type=IntegerType)
def add_one(x: int):
return x + 1
# Or
add_one = udf(lambda x: x + 1, return_type=IntegerType)
UDF with nested types
# UDF with nested types
@udf(return_type=StructType([StructField("value1", IntegerType), StructField("value2", IntegerType)]))
def example_udf(x: dict[str, int], y: list[int]):
return {
"value1": x["value1"] + x["value2"] + y[0],
"value2": x["value1"] + x["value2"] + y[1],
}
Source code in src/fenic/api/functions/builtin.py
when
when(condition: Column, value: Column) -> Column
Evaluates a condition and returns a value if true.
This function is used to create conditional expressions. If Column.otherwise() is not invoked, None is returned for unmatched conditions.
Parameters:
-
condition
(Column
) –A boolean Column expression to evaluate.
-
value
(Column
) –A Column expression to return if the condition is true.
Returns:
-
Column
–A Column expression that evaluates the condition and returns the specified value when true, and None otherwise.
Raises:
-
TypeError
–If the condition is not a boolean Column expression.
Basic conditional expression
# Basic usage
df.select(when(col("age") > 18, lit("adult")))
# With otherwise
df.select(when(col("age") > 18, lit("adult")).otherwise(lit("minor")))
Source code in src/fenic/api/functions/builtin.py