Databricks Job

`laktory.models.pipeline.orchestrators.databricksjoborchestrator.DatabricksJobOrchestrator`
Bases: `Job`, `PipelineChild`
Databricks job used as an orchestrator to execute a Laktory pipeline.
The job orchestrator supports incremental workloads with Spark Structured Streaming, but it does not support continuous processing. Selecting this orchestrator requires adding the supporting notebook to the stack, as sketched below.
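For illustration, here is a minimal sketch of selecting this orchestrator on a pipeline. It assumes `models.Pipeline` accepts an `orchestrator` block discriminated by a `type` field; the exact schema, field names, and discriminator value may differ across Laktory versions.

```python
from laktory import models

# A minimal sketch, not verified against a specific laktory release.
# The pipeline name, notebook path, and "DATABRICKS_JOB" discriminator
# value are illustrative assumptions.
pipeline = models.Pipeline(
    name="pl-stock-prices",
    nodes=[],  # pipeline nodes omitted for brevity
    orchestrator={
        "type": "DATABRICKS_JOB",
        "notebook_path": "/.laktory/jobs/job_laktory_pl.py",
    },
)
```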
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `notebook_path` | Path for the notebook. |
| `config_file` | Pipeline configuration (JSON) file deployed to the workspace and used by the job to read and execute the pipeline. |
| `requirements_file` | Pipeline requirements (JSON) file deployed to the workspace and used by the job to install the required Python dependencies. |
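As a sketch under the same caveats, the orchestrator can also be instantiated directly from the module path above. `notebook_path` is the documented attribute; `name` is assumed to be inherited from the `Job` base model, and all values are illustrative.

```python
from laktory.models.pipeline.orchestrators.databricksjoborchestrator import (
    DatabricksJobOrchestrator,
)

# Sketch only. `config_file` and `requirements_file` are left unset here,
# on the assumption that laktory supplies workspace-file defaults for them.
orchestrator = DatabricksJobOrchestrator(
    name="job-pl-stock-prices",  # assumed to come from the Job base model
    notebook_path="/.laktory/jobs/job_laktory_pl.py",  # hypothetical path
)
```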
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `additional_core_resources` | Configuration and requirements workspace files deployed alongside the job. |