The Kubernetes Executor was introduced in Apache Airflow 1.10.0. It creates a new pod for every task instance. Example Kubernetes files are available at scripts/in_container/kubernetes/app/.

The KubernetesExecutor runs as a process in the Scheduler that only requires access to the Kubernetes API (it does not need to run inside of a Kubernetes cluster). When a DAG submits a task, the KubernetesExecutor requests a worker pod from the Kubernetes API. The worker pod then runs the task, reports the result, and terminates.

The KubernetesExecutor requires a non-sqlite database backend, but it needs no external brokers or persistent workers. For these reasons, we recommend the KubernetesExecutor for deployments that have long periods of dormancy between DAG executions.

The Kubernetes Executor has an advantage over the Celery Executor in that pods are only spun up when required for task execution, whereas the Celery Executor's workers are statically configured and run all the time, regardless of workload. In contrast to the Celery Executor, the Kubernetes Executor does not require additional components such as Redis and Flower, but it does require Kubernetes infrastructure. Consider, for example, an Airflow deployment running on a distributed set of five nodes in a Kubernetes cluster.

You can also create a custom pod_template_file on a per-task basis so that you can recycle the same base values between multiple tasks. This replaces the default pod_template_file named in airflow.cfg, and you can then further override that template using pod_override, so a single task can combine both features.
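The non-sqlite backend requirement translates into a small amount of configuration. A minimal sketch of the relevant airflow.cfg fragment might look like the following; the Postgres connection string is illustrative, not prescriptive:

```ini
[core]
# Select the Kubernetes Executor; no Celery broker (Redis) or Flower needed.
executor = KubernetesExecutor
# Any non-sqlite database works; Postgres shown here as an example.
sql_alchemy_conn = postgresql+psycopg2://airflow:airflow@postgres:5432/airflow
```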
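To make the per-task customization concrete, here is a sketch of the `executor_config` dictionary that a task would pass to combine a custom `pod_template_file` with a `pod_override`. In a real DAG the override is normally a `kubernetes.client.V1Pod` object attached to an Airflow operator; it is shown here as a plain dict so the structure is visible without those dependencies, and the template path and labels are hypothetical:

```python
def make_executor_config(template_path):
    """Build an executor_config dict of the shape the KubernetesExecutor inspects.

    template_path: per-task pod template that replaces the default
    pod_template_file named in airflow.cfg.
    """
    return {
        # Replaces the default template for this task only.
        "pod_template_file": template_path,
        # Merged on top of the template; in real Airflow code this would be a
        # kubernetes.client V1Pod rather than a plain dict.
        "pod_override": {
            "metadata": {"labels": {"purpose": "demo"}},  # hypothetical label
            "spec": {
                "containers": [
                    {
                        "name": "base",
                        "resources": {"limits": {"memory": "512Mi"}},
                    }
                ]
            },
        },
    }


# A task would receive this via its executor_config parameter.
config = make_executor_config("/opt/airflow/templates/basic_template.yaml")
print(config["pod_template_file"])
```

The template carries shared base values, while the override holds only the deltas that are specific to one task.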