Workflow embedded execution in Java
This guide uses a standard Java virtual machine and a small set of Maven dependencies to execute a CNCF Serverless Workflow definition. It is therefore assumed that you are fluent in both Java and Maven.
The workflow definition to be executed can be read from a .json or .yaml file, or programmatically defined using the SonataFlow fluent API.
- Install OpenJDK 17+
- Install Apache Maven 3.9.3.
Hello world (using existing definition file)
The first step is to set up an empty Maven project that includes the workflow executor core dependency.
This guide also uses the slf4j dependency to avoid resorting to System.out.println.
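The relevant pom.xml entries might look like the following sketch. The groupId, artifact names, and versions here are assumptions; check the SonataFlow documentation for the exact coordinates matching your release.

```xml
<dependencies>
  <!-- Workflow executor core; coordinates are illustrative, verify against your release -->
  <dependency>
    <groupId>org.kie.kogito</groupId>
    <artifactId>kogito-serverless-workflow-executor-core</artifactId>
    <version>${kogito.version}</version>
  </dependency>
  <!-- slf4j facade plus a simple binding so log output reaches the console -->
  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>${slf4j.version}</version>
  </dependency>
</dependencies>
```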
Let’s assume you already have a workflow definition written in a JSON file in your project root directory, for example the Hello World definition. To execute it, write the following main Java class (standard Java imports and the package declaration are intentionally skipped for brevity):
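For reference, a hello.sw.json along these lines would produce the output shown later in this guide. The state name and metadata fields below are illustrative; see the linked Hello World definition for the canonical file.

```json
{
  "id": "helloworld",
  "version": "1.0",
  "specVersion": "0.8",
  "name": "Hello World",
  "start": "Inject Hello",
  "states": [
    {
      "name": "Inject Hello",
      "type": "inject",
      "data": {
        "greeting": "Hello World",
        "mantra": "Serverless Workflow is awesome!"
      },
      "end": true
    }
  ]
}
```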
import org.kie.kogito.serverless.workflow.executor.StaticWorkflowApplication;
import org.kie.kogito.serverless.workflow.models.JsonNodeModel;
import org.kie.kogito.serverless.workflow.utils.ServerlessWorkflowUtils;
import org.kie.kogito.serverless.workflow.utils.WorkflowFormat;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import io.serverlessworkflow.api.Workflow;
public class DefinitionFileExecutor {

    private static final Logger logger = LoggerFactory.getLogger(DefinitionFileExecutor.class);

    public static void main(String[] args) throws IOException {
        try (Reader reader = new FileReader("hello.sw.json"); (1)
             StaticWorkflowApplication application = StaticWorkflowApplication.create()) { (2)
            Workflow workflow = ServerlessWorkflowUtils.getWorkflow(reader, WorkflowFormat.JSON); (3)
            JsonNodeModel result = application.execute(workflow, Collections.emptyMap()); (4)
            logger.info("Workflow execution result is {}", result.getWorkflowdata()); (5)
        }
    }
}
1 | Reads the workflow definition file from the project root directory. |
2 | Creates a static workflow application object. It is created within the try block since the instance is Closeable. This is the reference that allows you to execute workflow definitions. |
3 | Reads the Serverless Workflow Java SDK Workflow object from the file. |
4 | Executes the workflow, passing the Workflow reference and no parameters (an empty Map). The result of the workflow execution (process instance id and workflow output model) can be accessed through the result variable. |
5 | Prints the workflow model to the configured standard output. |
If you compile and execute this Java class, you will see the following log in your configured standard output:
Workflow execution result is {"greeting":"Hello World","mantra":"Serverless Workflow is awesome!"}
Hello world (using fluent API)
By adding the kogito-serverless-workflow-fluent dependency to the Maven setup from the previous section, you can programmatically generate that workflow definition using the fluent API rather than loading it from a definition file.
You can therefore modify the previous example to produce the same output when executed, but instead of creating a FileReader that reads the Workflow object, you create the Workflow object using Java statements. The resulting modified main method is the following:
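The added dependency might look like this (the artifactId is the one named above; the groupId and version property are assumptions, so verify them against your release):

```xml
<dependency>
  <groupId>org.kie.kogito</groupId>
  <artifactId>kogito-serverless-workflow-fluent</artifactId>
  <version>${kogito.version}</version>
</dependency>
```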
// this snippet assumes static imports of the workflow, inject and jsonObject fluent API methods
try (StaticWorkflowApplication application = StaticWorkflowApplication.create()) {
    Workflow workflow = workflow("HelloWorld") (1)
        .start( (2)
            inject( (3)
                jsonObject().put("greeting", "Hello World").put("mantra", "Serverless Workflow is awesome!"))) (4)
        .end() (5)
        .build(); (6)
    logger.info("Workflow execution result is {}", application.execute(workflow, Collections.emptyMap()).getWorkflowdata()); (7)
}
1 | Creates a workflow named HelloWorld. |
2 | Indicates that you are going to specify the start state. |
3 | An Inject state is the start state. |
4 | The Inject state accepts static JSON, so this line creates the JSON data. |
5 | Ends the workflow definition. |
6 | Builds the workflow definition. |
7 | Executes the workflow and logs the result, as in the previous example. |
Additional fluent examples
You can find additional, commented examples of fluent API usage (including jq expression evaluation and orchestration of REST services) here.
Dependencies explanation
Embedded workflow uses a modular approach to keep the number of required dependencies as small as possible. The rationale is to avoid adding anything to the dependency set that you will not use. For example, the OpenAPI module is based on a Swagger parser; if you are not going to call any OpenAPI service, it is better to keep Swagger out of the dependency set. This means the core dependency does not include what is needed to use gRPC, OpenAPI, or most custom function types.
This is the list of additional dependencies you might need to add depending on the functionality you are using:
- REST: Add if you use the custom Rest function type.
- Service: Add if you use the custom Service function type.
- Python: Add if you use the custom Python function type. See example.
- Events: Add if you use an Event or Callback state in your workflow. Only Kafka events are supported right now. See the Publisher and Subscriber examples.
Persistence support
To enable persistence, you must include the desired SonataFlow persistence add-on as a dependency and set up StaticWorkflowApplication to use the ProcessInstances implementation provided by the add-on.
Since, within an embedded environment, you usually do not want to contact an external database, the recommendation is to use the RocksDB embedded database. You do that by adding the RocksDB add-on dependency and adding the following code snippet when you create your StaticWorkflowApplication object.
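The add-on dependency might be declared as follows. The artifactId below is hypothetical; take the exact coordinates from the persistence example linked at the end of this section.

```xml
<!-- artifactId is illustrative only; copy the real coordinates from the persistence example -->
<dependency>
  <groupId>org.kie.kogito</groupId>
  <artifactId>kogito-addons-persistence-rocksdb</artifactId>
  <version>${kogito.version}</version>
</dependency>
```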
// tempDir is a directory of your choice where RocksDB stores the workflow state
StaticWorkflowApplication.create().processInstancesFactory(new RocksDBProcessInstancesFactory(new Options().setCreateIfMissing(true), tempDir.toString()))
See the persistence example.
Found an issue?
If you find an issue or any misleading information, please feel free to report it here. We really appreciate it!