TableDataSink
laktory.models.datasinks.TableDataSink
Bases: BaseDataSink
Table data sink, such as a table in a Databricks catalog or in a data warehouse such as Snowflake or BigQuery.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `checkpoint_location` | Path to which the checkpoint file for a streaming dataframe should be written |
| `catalog_name` | Name of the catalog of the sink table |
| `table_name` | Name of the sink table |
| `schema_name` | Name of the schema of the sink table |
| `warehouse` | Type of warehouse to which the table should be published |
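The catalog, schema, and table names combine into a three-level table identifier. A minimal sketch of that naming convention (the helper and the `prod` catalog name are hypothetical, not part of laktory's API):

```python
def full_table_name(catalog_name: str, schema_name: str, table_name: str) -> str:
    """Join the three-level namespace into a `catalog.schema.table` identifier."""
    return ".".join([catalog_name, schema_name, table_name])


# A sink defined with catalog "prod", schema "finance" and table
# "slv_stock_prices" would resolve to "prod.finance.slv_stock_prices".
print(full_table_name("prod", "finance", "slv_stock_prices"))
```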
Examples:

```python
import pandas as pd

from laktory import models

df = spark.createDataFrame(
    pd.DataFrame(
        {
            "symbol": ["AAPL", "GOOGL"],
            "price": [200.0, 205.0],
            "tstamp": ["2023-09-01", "2023-09-01"],
        }
    )
)

sink = models.TableDataSink(
    catalog_name="prod",
    schema_name="finance",
    table_name="slv_stock_prices",
    mode="OVERWRITE",
)
# sink.write(df)
```
Attributes

Functions
purge

`purge(spark=None)`

Delete the sink's data and streaming checkpoints.
Source code in laktory/models/datasinks/tabledatasink.py
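The effect of `purge` can be illustrated with a stdlib-only sketch (the paths and helper below are hypothetical; laktory's actual implementation operates on the sink's table and its configured `checkpoint_location`):

```python
import shutil
from pathlib import Path


def purge_sketch(table_path: Path, checkpoint_location: Path) -> None:
    """Delete the sink's data directory and its streaming checkpoints, if present."""
    for path in (table_path, checkpoint_location):
        if path.exists():
            shutil.rmtree(path)


# Usage: create dummy data and checkpoint directories, then purge both.
table = Path("/tmp/demo_table")
checkpoint = Path("/tmp/demo_checkpoints")
for p in (table, checkpoint):
    p.mkdir(parents=True, exist_ok=True)

purge_sketch(table, checkpoint)
print(table.exists(), checkpoint.exists())
```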
as_source

`as_source(as_stream=None)`
Generate a table data source with the same properties as the sink.
| PARAMETER | DESCRIPTION |
|---|---|
| `as_stream` | If `True`, the source is configured to read the table as a stream. DEFAULT: `None` |

| RETURNS | DESCRIPTION |
|---|---|
| `TableDataSource` | Table data source with the same properties as the sink |
Source code in laktory/models/datasinks/tabledatasink.py
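The sink-to-source mirroring can be sketched with plain dataclasses (illustrative only; the real `TableDataSource` carries additional properties and reading logic):

```python
from dataclasses import dataclass


@dataclass
class SourceSketch:
    catalog_name: str
    schema_name: str
    table_name: str
    as_stream: bool


@dataclass
class SinkSketch:
    catalog_name: str
    schema_name: str
    table_name: str

    def as_source(self, as_stream: bool = False) -> SourceSketch:
        # The source reads from the same fully qualified table the sink writes to.
        return SourceSketch(
            catalog_name=self.catalog_name,
            schema_name=self.schema_name,
            table_name=self.table_name,
            as_stream=as_stream,
        )


# Usage: a downstream pipeline node can consume the sink's table as a source.
sink = SinkSketch("prod", "finance", "slv_stock_prices")
source = sink.as_source(as_stream=True)
print(source.table_name, source.as_stream)
```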