PipelineNode
laktory.models.PipelineNode
Bases: BaseModel, PipelineChild
Pipeline base component generating a DataFrame by reading a data source and applying a transformer (a chain of DataFrame transformations). The output can optionally be written to one or more data sinks. Some basic transformations are also natively supported.
ATTRIBUTE | DESCRIPTION |
---|---|
`add_layer_columns` | If `True` and `layer` is set, layer-specific columns such as timestamps are added to the resulting DataFrame. |
`dlt_template` | Specify which template (notebook) to use if pipeline is run with Databricks Delta Live Tables. If `None`, the default laktory template is used. |
`drop_duplicates` | If `True`, duplicate rows are dropped, using `primary_keys` as the default subset when no explicit subset is provided. |
`drop_source_columns` | If `True`, source columns are dropped from the output DataFrame after the transformations are applied. |
`expectations` | List of expectations for the DataFrame. Can be used to raise warnings, drop invalid records or fail a pipeline. |
`layer` | Layer in the medallion architecture (`BRONZE`, `SILVER` or `GOLD`). |
`name` | Name given to the node. Required to reference a node in a data source. |
`primary_keys` | A list of column names that uniquely identify each row in the DataFrame. These columns are used to document the uniqueness constraints of the node's output data and to define the default subset for dropping duplicate rows if no explicit subset is provided in `drop_duplicates`. |
`root_path` | Location of the pipeline node root used to store logs, metrics and checkpoints. |
`source` | Definition of the data source. |
`sinks` | Definition of the data sink(s). Set `is_quarantine` to `True` on a sink to store invalid data. |
`transformer` | Spark or Polars chain defining the data transformations applied to the data source. |
`timestamp_key` | Name of the column storing a timestamp associated with each row. It is used as the default column by the builder when creating watermarks. |
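As an illustration, `primary_keys` and `drop_duplicates` from the table above might be combined as follows. This is a minimal sketch: the column names and source path are illustrative, not taken from the laktory documentation.

```python
import io

from laktory import models

# Hypothetical node relying on primary_keys as the default
# deduplication subset (column names are illustrative).
node_yaml = '''
name: brz_stock_prices
source:
  format: CSV
  path: ./raw/brz_stock_prices.csv
primary_keys:
- symbol
- created_at
drop_duplicates: True
'''

node = models.PipelineNode.model_validate_yaml(io.StringIO(node_yaml))
```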
Examples:
A node reading stock prices data from a CSV file and writing the resulting DataFrame to disk as a Parquet file.
```python
import io

from laktory import models

node_yaml = '''
name: brz_stock_prices
layer: BRONZE
source:
  format: CSV
  path: ./raw/brz_stock_prices.csv
sinks:
- format: PARQUET
  mode: OVERWRITE
  path: ./dataframes/brz_stock_prices
'''

node = models.PipelineNode.model_validate_yaml(io.StringIO(node_yaml))

# node.execute(spark)
```
A node reading stock prices from an upstream node and writing a DataFrame to a data table.
```python
import io

from laktory import models

node_yaml = '''
name: slv_stock_prices
layer: SILVER
source:
  node_name: brz_stock_prices
sinks:
- catalog_name: hive_metastore
  schema_name: default
  table_name: slv_stock_prices
transformer:
  nodes:
  - with_column:
      name: created_at
      type: timestamp
      expr: data.created_at
  - with_column:
      name: symbol
      expr: data.symbol
  - with_column:
      name: close
      type: double
      expr: data.close
'''

node = models.PipelineNode.model_validate_yaml(io.StringIO(node_yaml))

# node.execute(spark)
```
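Expectations and quarantine sinks can be combined in a similar way. The sketch below assumes expectation fields `name`, `expr` and `action` and a sink flag `is_quarantine`; check the schema of your laktory version before relying on these names.

```python
import io

from laktory import models

# Hypothetical node quarantining invalid rows to a dedicated sink.
# The expectation fields and `is_quarantine` flag are assumptions.
node_yaml = '''
name: slv_stock_prices
source:
  node_name: brz_stock_prices
expectations:
- name: positive_price
  expr: close > 0
  action: QUARANTINE
sinks:
- format: PARQUET
  path: ./dataframes/slv_stock_prices
- format: PARQUET
  path: ./dataframes/slv_stock_prices_quarantine
  is_quarantine: True
'''

node = models.PipelineNode.model_validate_yaml(io.StringIO(node_yaml))
```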
METHOD | DESCRIPTION |
---|---|
`push_df_backend` | Push the `dataframe_backend` to child models, which is required to differentiate between Spark and Polars transformers. |
`execute` | Execute pipeline node by reading the source, applying the transformer, checking expectations and writing the sinks. |
`check_expectations` | Check expectations, raise errors and warnings where required and build the filtered and quarantine DataFrames. |
ATTRIBUTE | DESCRIPTION |
---|---|
`is_orchestrator_dlt` | If `True`, pipeline node is used in the context of a DLT pipeline. |
`stage_df` | DataFrame resulting from reading the source and applying the transformer, before data quality checks are applied. |
`output_df` | DataFrame resulting from reading the source, applying the transformer and dropping rows not meeting data quality expectations. |
`quarantine_df` | DataFrame storing the `stage_df` rows not meeting data quality expectations. |
`output_sinks` | List of sinks writing the output DataFrame. |
`quarantine_sinks` | List of sinks writing the quarantine DataFrame. |
`all_sinks` | List of all sinks (output and quarantine). |
`sinks_count` | Total number of sinks. |
`has_output_sinks` | If `True`, node has at least one output sink. |
`has_sinks` | If `True`, node has at least one sink. |
`primary_sink` | Primary output sink used as a source for downstream nodes. |
`upstream_node_names` | Pipeline node names required to execute the current node. |
`data_sources` | Get all sources feeding the pipeline node. |
Attributes

is_orchestrator_dlt property

If `True`, pipeline node is used in the context of a DLT pipeline.

stage_df property

DataFrame resulting from reading the source and applying the transformer, before data quality checks are applied.

output_df property

DataFrame resulting from reading the source, applying the transformer and dropping rows not meeting data quality expectations.

quarantine_df property

DataFrame storing the `stage_df` rows not meeting data quality expectations.

upstream_node_names property

Pipeline node names required to execute the current node.
Functions

push_df_backend classmethod

push_df_backend(data)

Push the `dataframe_backend` to child models, which is required to differentiate between Spark and Polars transformers.
Source code in laktory/models/pipeline/pipelinenode.py
execute

execute(apply_transformer=True, spark=None, udfs=None, write_sinks=True, full_refresh=False)

Execute pipeline node by:

- Reading the source
- Applying the user-defined (and layer-specific, if applicable) transformations
- Checking expectations
- Writing the sinks

PARAMETER | DESCRIPTION |
---|---|
`apply_transformer` | Flag to apply transformer in the execution. TYPE: `bool` DEFAULT: `True` |
`spark` | Spark session. DEFAULT: `None` |
`udfs` | User-defined functions. DEFAULT: `None` |
`write_sinks` | Flag to include writing sinks in the execution. TYPE: `bool` DEFAULT: `True` |
`full_refresh` | If `True`, existing data and checkpoints are ignored and the node is re-processed from scratch. TYPE: `bool` DEFAULT: `False` |

RETURNS | DESCRIPTION |
---|---|
`AnyDataFrame` | Output DataFrame |
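A minimal usage sketch, assuming a node defined as in the examples above and a Spark backend (the session setup is standard PySpark, not laktory-specific):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read the source, apply the transformer, check expectations and write
# the sinks. Pass write_sinks=False to preview the output without persisting.
df = node.execute(spark=spark)
df.show()
```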
Source code in laktory/models/pipeline/pipelinenode.py
check_expectations

check_expectations()

Check expectations, raise errors and warnings where required, and build the filtered and quarantine DataFrames.

Some actions have to be disabled when the selected orchestrator is Databricks DLT:

- Raising an error on failure when the expectation is supported by DLT
- Dropping rows when the expectation is supported by DLT
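As a sketch, the checks could also be run explicitly and the resulting DataFrames inspected through the properties documented above; normally `execute` takes care of this step:

```python
# Assumes the node has already read its source (e.g. via execute) so that
# stage_df is populated. Property names are taken from this page.
node.check_expectations()

valid_df = node.output_df            # rows meeting expectations
quarantined_df = node.quarantine_df  # rows failing expectations
```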
Source code in laktory/models/pipeline/pipelinenode.py