Managing Jobs Service with the Operator
This document describes how to configure the Jobs Service instance using the SonataFlowPlatform CR.
Automate the Jobs Service instance management with the SonataFlow Operator
While it is possible to deploy the Jobs Service manually, leveraging the Operator offers a more seamless integration by combining it with namespace configuration through the SonataFlowPlatform Custom Resource (CR). When the Operator oversees the lifecycle of the Jobs Service, it automatically injects the necessary properties into the SonataFlow workflows at creation time. This integration eliminates the need to include these properties within the SonataFlow workflow CR instance, simplifying workflow management.
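If you want to verify which properties were injected for a given workflow, one quick check is to inspect the generated configuration ConfigMap. This assumes the Operator stores the generated application.properties in a ConfigMap named after the workflow with a -props suffix; the exact naming may vary across Operator versions:
---
$>kubectl get configmap <workflow-name>-props -n sonataflow -o yaml
---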
Configuring Jobs Service in the SonataFlowPlatform CR
To enable the deployment of a Jobs Service instance, the SonataFlowPlatform CRD exposes a set of fields that allow the user to configure the running instance.
Ephemeral persistence
The most basic setup is to deploy the Jobs Service with an ephemeral backend that runs in the same container as the Jobs Service runtime.
---
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
spec:
  services:
    jobService: {}
---
When this manifest is applied, the operator reconciles it and generates a pod hosting the Jobs Service:
---
$>kubectl get pod -n sonataflow
NAME READY STATUS RESTARTS AGE
sonataflow-platform-jobs-service-cdf85d969-sbwkj 1/1 Running 0 108s
---
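To quickly check that the ephemeral instance is up, you can port-forward the Jobs Service deployment and hit its Quarkus health endpoint. The deployment name below follows the <platform-name>-jobs-service convention visible in the pod name above, and port 8080 (the Quarkus default) is an assumption:
---
$>kubectl port-forward deployment/sonataflow-platform-jobs-service 8080:8080 -n sonataflow
$>curl http://localhost:8080/q/health
---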
Keep in mind that this setup is not recommended for production environments, especially because the data does not persist when the pod restarts.
Using an existing PostgreSQL service
For robust environments, it is recommended to use a dedicated database service and configure the Jobs Service to make use of it. Currently, the Jobs Service only supports PostgreSQL as its database.
Configuring the Jobs Service to communicate with an existing PostgreSQL instance is supported in two ways. Both require providing the persistence configuration: one uses the Jobs Service's persistence field, and the other uses the persistence field defined in the SonataFlowPlatform CR deployed in the same namespace.
By default, the persistence specification defined in the SonataFlow workflow's CR takes priority over the SonataFlowPlatform persistence specification.
Using the persistence field defined in the SonataFlowPlatform CR
Using the persistence configuration in the SonataFlowPlatform CR located in the same namespace requires the SonataFlow CR persistence field to be configured with an empty {} value, signaling the Operator to derive the persistence from the active SonataFlowPlatform, when available. If no persistence is defined, the operator will fall back to the ephemeral persistence previously described.
---
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
spec:
  persistence:
    postgresql:
      secretRef:
        name: postgres-secrets
        userKey: POSTGRES_USER
        passwordKey: POSTGRES_PASSWORD
      serviceRef:
        name: postgres
        port: 5432
        databaseName: sonataflow
---
And the SonataFlow CR looks like this:
---
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: callbackstatetimeouts
  annotations:
    sonataflow.org/description: Callback State Timeouts Example k8s
    sonataflow.org/version: 0.0.1
spec:
  persistence: {}
...
---
When using the jdbcUrl field instead of serviceRef, the user is responsible for providing a correct JDBC connection URL that does not contain a database schema. The operator uses the contents of this field verbatim as the JDBC connection in the Jobs Service pod, and if the URL points to a schema that has already been used or formatted by a different client, the pod will fail to run.
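As a rough sketch, a platform configured through jdbcUrl could look like the example below. The hostname and database name are illustrative, no schema is appended to the URL, and you should check your operator version for the exact field placement:
---
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
spec:
  persistence:
    postgresql:
      secretRef:
        name: postgres-secrets
        userKey: POSTGRES_USER
        passwordKey: POSTGRES_PASSWORD
      # illustrative value; the operator passes it verbatim to the Jobs Service pod
      jdbcUrl: jdbc:postgresql://postgres.example.com:5432/sonataflow
---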
Using the persistence field inside the service specification
You can define the persistence configuration directly in the Jobs Service specification. The structure is the same as in the SonataFlowPlatform CR and also consists of the credentials to access the PostgreSQL instance and the Kubernetes service reference used to establish connectivity.
---
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
spec:
  services:
    jobService:
      persistence:
        postgresql:
          secretRef:
            name: postgres-secrets
            userKey: POSTGRES_USER
            passwordKey: POSTGRES_PASSWORD
          serviceRef:
            name: postgres
            port: 5432
            databaseName: sonataflow
            databaseSchema: jobs-service
---
In this example we're using a PostgreSQL service located in the same namespace where the Jobs Service is deployed, pointing to the default PostgreSQL port 5432, and referencing the sonataflow database and the jobs-service schema. By default, when the Jobs Service pod starts, it will create the schema in the given location if it doesn't exist.
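If you want to confirm that the schema was created, a simple check (assuming PostgreSQL runs as a Deployment named postgres in the same namespace, and using placeholder credentials) is to list the schemas with psql:
---
$>kubectl exec -it deployment/postgres -n sonataflow -- psql -U <postgres-user> -d sonataflow -c '\dn'
---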
The values of POSTGRES_USER and POSTGRES_PASSWORD point to the keys in the secret that contains the PostgreSQL credentials.
---
$>kubectl describe secrets postgres-secrets
Name: postgres-secrets
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
PGDATA:             28 bytes
POSTGRES_DATABASE:  10 bytes
POSTGRES_PASSWORD:  10 bytes
POSTGRES_USER:      10 bytes
---
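For reference, a secret with these keys can be created with kubectl in the namespace where the platform is deployed; the values below are illustrative placeholders:
---
$>kubectl create secret generic postgres-secrets \
  --from-literal=POSTGRES_USER=<user> \
  --from-literal=POSTGRES_PASSWORD=<password> \
  --from-literal=POSTGRES_DATABASE=sonataflow
---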
Putting everything together will produce an output similar to the one below:
---
$>kubectl get pod -w -n sonataflow
NAME READY STATUS RESTARTS AGE
postgres-64c9ddb6bc-v96qb 1/1 Running 0 14m
sonataflow-platform-jobs-service-79bb6bc67f-gvcg4 1/1 Running 0 1m
---
Conclusion
The Operator extends its scope to manage the lifecycle of the Jobs Service instance, removing that burden from users and allowing them to focus on the implementation of their SonataFlow workflows. It takes care of managing the Jobs Service deployment, dynamically configures its application properties when the Data Index service is also available and managed by the Operator, and finally injects application properties into the SonataFlow workflows in the target namespace when the Jobs Service is available.
Found an issue?
If you find an issue or any misleading information, please feel free to report it here. We really appreciate it!