fenic.api.io
IO module for reading and writing DataFrames to external storage.
Classes:

- DataFrameReader – Interface used to load a DataFrame from external storage systems.
- DataFrameWriter – Interface used to write a DataFrame to external storage systems.
DataFrameReader
DataFrameReader(session_state: BaseSessionState)
Interface used to load a DataFrame from external storage systems.
Similar to PySpark's DataFrameReader.
Creates a DataFrameReader.
Parameters:

- session_state (BaseSessionState) – The session state to use for reading.
Methods:

- csv – Load a DataFrame from one or more CSV files.
- parquet – Load a DataFrame from one or more Parquet files.
Source code in src/fenic/api/io/reader.py
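A minimal sketch of how the reader is typically reached; the constructor above takes session state, but the examples in this section access the reader through `session.read`, assuming an existing fenic `session`:

```python
# Assumes an existing fenic `session`; `session.read` exposes the
# DataFrameReader used throughout the examples below.
reader = session.read
df_csv = reader.csv("data/*.csv")              # dispatches to csv()
df_parquet = reader.parquet("data/*.parquet")  # dispatches to parquet()
```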
csv
csv(paths: Union[str, Path, list[Union[str, Path]]], schema: Optional[Schema] = None, merge_schemas: bool = False) -> DataFrame
Load a DataFrame from one or more CSV files.
Parameters:

- paths (Union[str, Path, list[Union[str, Path]]]) – A single file path, a glob pattern (e.g., "data/*.csv"), or a list of paths.
- schema (Optional[Schema], default: None) – A complete schema definition of column names and their types; only primitive types are supported. For example: Schema([ColumnField(name="id", data_type=IntegerType), ColumnField(name="name", data_type=StringType)]). If provided, all files must match this schema exactly: all column names must be present, and values must be convertible to the specified types. Partial schemas are not allowed.
- merge_schemas (bool, default: False) – Whether to merge schemas across all files.
  - If True: column names are unified across files, missing columns are filled with nulls, and column types are inferred and widened as needed.
  - If False (default): only columns from the first file are accepted. Column types inferred from the first file are applied to all files; if a subsequent file does not have the same column names and order as the first file, an error is raised.
  - The "first file" is the first file in lexicographic order (for glob patterns), or the first file in the provided list (for lists of paths).
Notes

- The first row in each file is assumed to be a header row.
- Delimiters (e.g., comma, tab) are automatically inferred (see the sketch after this list).
- You may specify either schema or merge_schemas=True, but not both.
- Any date/datetime columns are cast to strings during ingestion.
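As an illustration of the inference note above, a tab-delimited file can be read without specifying a delimiter; a minimal sketch, assuming an existing fenic `session` (the file contents here are made up):

```python
from pathlib import Path

# Tab-delimited contents, header row first; the .csv extension is still
# required (see Raises below).
path = Path("metrics.csv")
path.write_text("id\tname\n1\talice\n2\tbob\n")

df = session.read.csv(path)  # delimiter inferred automatically
```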
Raises:

- ValidationError – If both schema and merge_schemas=True are provided.
- ValidationError – If any path does not end with .csv.
- PlanError – If schemas cannot be merged, or if there is a schema mismatch when merge_schemas=False.
Read a single CSV file:

```python
df = session.read.csv("file.csv")
```

Read multiple CSV files with schema merging:

```python
df = session.read.csv("data/*.csv", merge_schemas=True)
```

Read CSV files with explicit schema:

```python
df = session.read.csv(
    ["a.csv", "b.csv"],
    schema=Schema([
        ColumnField(name="id", data_type=IntegerType),
        ColumnField(name="value", data_type=FloatType)
    ])
)
```
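Since schema and merge_schemas=True are mutually exclusive, the documented ValidationError can be handled explicitly; a sketch, with the import path for ValidationError being an assumption (it is not stated in this section):

```python
# Sketch: passing both options raises ValidationError, per the Raises list above.
from fenic import ValidationError  # import path is an assumption

try:
    df = session.read.csv(
        "data/*.csv",
        schema=Schema([ColumnField(name="id", data_type=IntegerType)]),
        merge_schemas=True,  # conflicts with schema
    )
except ValidationError as err:
    print(f"invalid reader options: {err}")
```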
Source code in src/fenic/api/io/reader.py
parquet
parquet(paths: Union[str, Path, list[Union[str, Path]]], merge_schemas: bool = False) -> DataFrame
Load a DataFrame from one or more Parquet files.
Parameters:

- paths (Union[str, Path, list[Union[str, Path]]]) – A single file path, a glob pattern (e.g., "data/*.parquet"), or a list of paths.
- merge_schemas (bool, default: False) – If True, infers and merges schemas across all files. Missing columns are filled with nulls, and differing types are widened to a common supertype.
Behavior

- If merge_schemas=False (default), all files must match the schema of the first file exactly. Subsequent files must contain all columns from the first file with compatible data types; if any column is missing or has an incompatible type, an error is raised.
- If merge_schemas=True, column names are unified across all files, and data types are automatically widened to accommodate all values.
- The "first file" is the first file in lexicographic order (for glob patterns), or the first file in the provided list (for lists of paths).
Notes
- Date and datetime columns are cast to strings during ingestion.
Raises:

- ValidationError – If any file does not have a .parquet extension.
- PlanError – If schemas cannot be merged, or if there is a schema mismatch when merge_schemas=False.
Read a single Parquet file:

```python
df = session.read.parquet("file.parquet")
```

Read multiple Parquet files:

```python
df = session.read.parquet("data/*.parquet")
```

Read Parquet files with schema merging:

```python
df = session.read.parquet(["a.parquet", "b.parquet"], merge_schemas=True)
```
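With merge_schemas=False, a file that deviates from the first file's schema surfaces as the documented PlanError; a sketch, with the import path for PlanError being an assumption:

```python
# Sketch: if b.parquet is missing a column from a.parquet (or a type is
# incompatible), PlanError is raised, per the Raises list above.
from fenic import PlanError  # import path is an assumption

try:
    df = session.read.parquet(["a.parquet", "b.parquet"])  # merge_schemas=False
except PlanError as err:
    print(f"schema mismatch: {err}")
```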
Source code in src/fenic/api/io/reader.py
DataFrameWriter
DataFrameWriter(dataframe: DataFrame)
Interface used to write a DataFrame to external storage systems.
Similar to PySpark's DataFrameWriter.
Initialize a DataFrameWriter.
Parameters:

- dataframe (DataFrame) – The DataFrame to write.
Methods:

- csv – Saves the content of the DataFrame as a single CSV file with comma as the delimiter and headers in the first row.
- parquet – Saves the content of the DataFrame as a single Parquet file.
- save_as_table – Saves the content of the DataFrame as the specified table.
Source code in src/fenic/api/io/writer.py
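The writer is obtained from a DataFrame rather than constructed directly; a minimal sketch of the access pattern the examples below rely on, assuming an existing fenic `session`:

```python
# Assumes an existing fenic `session`; `df.write` returns the DataFrameWriter
# for that DataFrame, as used throughout the examples below.
df = session.read.parquet("events.parquet")

metrics = df.write.csv("events.csv")        # write out as a single CSV file
metrics = df.write.save_as_table("events")  # or persist as a table
```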
csv
csv(file_path: Union[str, Path], mode: Literal['error', 'overwrite', 'ignore'] = 'overwrite') -> QueryMetrics
Saves the content of the DataFrame as a single CSV file with comma as the delimiter and headers in the first row.
Parameters:

- file_path (Union[str, Path]) – Path to save the CSV file to.
- mode (Literal['error', 'overwrite', 'ignore'], default: 'overwrite') – Write mode. Default is "overwrite".
  - error: Raises an error if the file exists.
  - overwrite: Overwrites the file if it exists.
  - ignore: Silently ignores the operation if the file exists.
Returns:

- QueryMetrics (QueryMetrics) – The query metrics.
Save with overwrite mode (default):

```python
df.write.csv("output.csv")  # Overwrites if exists
```

Save with error mode:

```python
df.write.csv("output.csv", mode="error")  # Raises error if exists
```

Save with ignore mode:

```python
df.write.csv("output.csv", mode="ignore")  # Skips if exists
```
Source code in src/fenic/api/io/writer.py
parquet
parquet(file_path: Union[str, Path], mode: Literal['error', 'overwrite', 'ignore'] = 'overwrite') -> QueryMetrics
Saves the content of the DataFrame as a single Parquet file.
Parameters:

- file_path (Union[str, Path]) – Path to save the Parquet file to.
- mode (Literal['error', 'overwrite', 'ignore'], default: 'overwrite') – Write mode. Default is "overwrite".
  - error: Raises an error if the file exists.
  - overwrite: Overwrites the file if it exists.
  - ignore: Silently ignores the operation if the file exists.
Returns:

- QueryMetrics (QueryMetrics) – The query metrics.
Save with overwrite mode (default):

```python
df.write.parquet("output.parquet")  # Overwrites if exists
```

Save with error mode:

```python
df.write.parquet("output.parquet", mode="error")  # Raises error if exists
```

Save with ignore mode:

```python
df.write.parquet("output.parquet", mode="ignore")  # Skips if exists
```
Source code in src/fenic/api/io/writer.py
save_as_table
save_as_table(table_name: str, mode: Literal['error', 'append', 'overwrite', 'ignore'] = 'error') -> QueryMetrics
Saves the content of the DataFrame as the specified table.
Parameters:

- table_name (str) – Name of the table to save to.
- mode (Literal['error', 'append', 'overwrite', 'ignore'], default: 'error') – Write mode. Default is "error".
  - error: Raises an error if the table exists.
  - append: Appends data to the table if it exists.
  - overwrite: Overwrites the existing table.
  - ignore: Silently ignores the operation if the table exists.
Returns:

- QueryMetrics (QueryMetrics) – The query metrics.
Save with error mode (default):

```python
df.write.save_as_table("my_table")  # Raises error if table exists
```

Save with append mode:

```python
df.write.save_as_table("my_table", mode="append")  # Adds to existing table
```

Save with overwrite mode:

```python
df.write.save_as_table("my_table", mode="overwrite")  # Replaces existing table
```
Source code in src/fenic/api/io/writer.py