Digital Distributed Containers (DDC)

A Digital Distributed Container subscription (Single and/or Multi Service Container) is required.

The Digital Distributed Container functionality allows you to create standalone containers containing either a single service or multiple services. These containers can then be deployed on a container-based infrastructure and scaled according to the organization's needs.

Single Service Container

Micro-service pattern: an individual service is exported as a container that can be deployed on your organization's infrastructure. Licensed using Single Service Container Tokens (SSCT).

Multi Service Container

Bundled-service pattern: multiple services are selected and exported as a single container that can be deployed on your organization's infrastructure. Licensed using Multi Service Container Tokens (MSCT).

Each container requires its own license token to execute. A token of the proper type (SSCT or MSCT) is required for each deployment. A deployment is defined as a container image deployed in a single location. This means that local scaling of a given container in the same location can be done with a single token, but geographical scaling requires a token per location.

Creating a Digital Distributed Container

Before containers can be built, the administrator must configure the base image for the containers.

Single Service Container

Single service containers are automatically created when a service is published to the Service Library from the modelers.

Each execution environment can be configured to build the containers either locally or push them to a remote registry.

Multi Service Container

Multi service containers are built on demand from the Service Library by selecting the services to include in the container and pushing the container image to a remote registry.

Securing a Digital Distributed Container

There are two aspects to consider when deploying a container, its transport layer security and its user provider configuration.

Transport Layer

The containers expose an HTTP port (8181) to access their user interface and REST API. This web interface should be secured using Transport Layer Security (TLS), most commonly referred to as HTTPS. There are two main strategies to secure the container: using an encrypted ingress or making the exposed port HTTPS.

Encrypted ingress

A common way of dealing with transport layer encryption in a Kubernetes environment is to rely on an ingress configured with an automatic HTTPS certificate manager (such as Let's Encrypt) to secure the ingress.
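As a sketch, assuming cert-manager is installed with a ClusterIssuer named letsencrypt (the names, host, and backend service below are illustrative placeholders, not part of the product), such an ingress could look like:

```yaml
# Illustrative Kubernetes Ingress terminating TLS in front of a DDC;
# cert-manager issues the certificate into the ddc-tls secret.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ddc-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
    - hosts:
        - ddc.example.com
      secretName: ddc-tls
  rules:
    - host: ddc.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ddc-service
                port:
                  number: 8181
```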

Encrypting the exposed port

It is also possible to have the Digital Distributed Container use a provided certificate and encryption key, turning the HTTP port 8181 into an HTTPS port 8181.

Mounting a PEM-encoded HTTPS certificate file at /data/https/tls.crt and a PEM-encoded private key file at /data/https/tls.key automatically encrypts the HTTP port with the HTTPS certificate using the private key.

The following openssl command can be used to generate a self-signed sample certificate and key:

  openssl req -newkey rsa:4096 \
          -x509 \
          -sha256 \
          -days 3650 \
          -nodes \
          -subj "/" \
          -out tls.crt \
          -keyout tls.key
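Assuming the files generated above and a locally available container image (the image name below is a placeholder), the certificate and key can be mounted like this:

```shell
# Mount the PEM certificate and key read-only at the paths the
# container watches; port 8181 then serves HTTPS.
docker run --rm -p 8181:8181 \
    -v "$(pwd)/tls.crt:/data/https/tls.crt:ro" \
    -v "$(pwd)/tls.key:/data/https/tls.key:ro" \
    my-registry.example.com/my-service:1.0
```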
Trisotech recommends never exposing a Digital Distributed Container over an unencrypted transport layer.

User Provider

The container is secured by default. Users and services must authenticate before they are allowed to use the container. Containers built from within Digital Enterprise Suite (DES) are automatically configured to use that DES instance as its authorization server. No additional configuration is required aside from network connectivity between the deployed containers and the DES instance.

Important security aspects that should be taken into account when building and running digital distributed containers:

1. Containers are built for a given execution environment (e.g. prod)

  2. Security restrictions from the execution environment are included in the container

  3. Access rights are enforced based on execution environment settings

Users that have access to DES will also have access to the container's services as long as they are granted roles within the environment the container was built for.

Security of the Digital Distributed Containers is based on the OpenID Connect standard to allow easy configuration with external OpenID Connect providers as an alternative to the Digital Enterprise Suite.

When using an external OpenID Connect provider, not all security features are available. Information about the groups a user is a member of is not available, and execution environment access is not synced at runtime, which might result in users not being allowed to perform certain operations.

Disabling security

Security can be disabled to run containers in open mode, without requiring authentication. This could be required if there is no connectivity between the container and the DES instance, or if you want to implement an external authorization mechanism.

Setting the SSO_METHOD environment variable to none when starting the container runs it in open mode.
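For example (the image name is a placeholder), the container can be started in open mode as follows:

```shell
# SSO_METHOD=none disables authentication entirely; only do this on
# networks where an external authorization mechanism is in place.
docker run --rm -p 8181:8181 \
    --env SSO_METHOD=none \
    my-registry.example.com/my-service:1.0
```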

Using an external OpenID Connect provider

To use an external OpenID Connect provider, the following environment variables must be provided when starting the container:

  • OIDC_ISSUER - URL of the provider that acts as issuer; also the starting point for discovery of the OpenID Connect configuration

  • OIDC_CLIENT_ID - client id of the application created in the OpenID Connect provider

  • OIDC_CLIENT_SECRET - corresponding client secret created in the OpenID Connect provider. Only required when the OpenID Connect provider does not allow the use of PKCE
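Putting these together, a container could be started against an external provider like this (the issuer URL, client id, secret, and image name are all placeholders):

```shell
# Point the container at an external OpenID Connect provider;
# omit OIDC_CLIENT_SECRET when the provider supports PKCE.
docker run --rm -p 8181:8181 \
    --env OIDC_ISSUER=https://auth.example.com/realms/main \
    --env OIDC_CLIENT_ID=ddc-client \
    --env OIDC_CLIENT_SECRET=change-me \
    my-registry.example.com/my-service:1.0
```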

If group information is required, the client application on the OpenID Connect provider should be configured to return group names as a groups claim in the ID token.
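For illustration, an ID token payload carrying such a claim might look like the following (all values are placeholders; how the claim is enabled depends on the provider):

```json
{
  "sub": "1234567890",
  "email": "jane.doe@example.com",
  "groups": ["finance", "human-resources"]
}
```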

Groups for task assignment

Containers do not retain user group membership. To properly use the email notification channel with a group defined as performer, the group must therefore have an email address defined.

Service storage

For services that require state persistence (long-running workflows and cases), it is possible to use either the file system, through a persistent volume mounted under /data/des/pinstances, or a MongoDB connection.

By default, unless the proper environment variables are set, file system persistence is used. If no persistent volume is mounted under /data/des/pinstances, the state will be deleted between container restarts.

When using file system persistence with multiple container instances running in parallel, the same persistent volume must be mounted on each container instance.
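As a sketch with Docker (the volume and image names are placeholders), two instances sharing the same state volume could be started as:

```shell
# Both instances mount the same named volume under the persistence
# path, so they see the same workflow/case state.
docker volume create ddc-state
docker run -d -p 8181:8181 -v ddc-state:/data/des/pinstances \
    my-registry.example.com/my-service:1.0
docker run -d -p 8182:8181 -v ddc-state:/data/des/pinstances \
    my-registry.example.com/my-service:1.0
```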

When MongoDB is configured (through environment variables), storage is delegated to a MongoDB service (single or cluster) that can be scaled. Each service writes to its own database. Each data store (if subscribed) is stored in a separate database. Multiple services can therefore share the same MongoDB service concurrently (either multiple single service containers or multiple multi service containers).

Override service descriptors

For services that communicate with external systems (e.g. REST APIs, triggers, etc.), different values, called service descriptors, need to be provided when deploying to various environments. A common case is a container that is promoted through different environments (dev → test → prod) and must point to the corresponding environments of the external systems.

The required service descriptors can be extracted from the container by running it with the environment variable DES_INFO=json set. An example docker run command would look like the following:

docker run --rm --env DES_INFO=json

This command will print output like the following:

  {
    "prod/bpmn/my-service/1.0": [
      {
        "interfaceId": "_c9e1e003-ec00-47bc-abfe-10df53c1ed46", // 1
        "interfaceName": "Calculate Body Mass Index",
        "id": "6332b6e2-db81-4885-9a28-d20beb42e92e",
        "serviceUrl": "http://localhost:8080/execution", // 2
        "updated": "Mar 15, 2021, 12:17:11 PM",
        "name": "bpmn-test",
        "type": "oauth2", // 3
        "options": { // 4
          "clientSecret": "to be set",
          "clientId": "to be set",
          "accessToken": "to be set",
          "tokenType": "Bearer",
          "refreshUrl": "http://localhost:8080/oauth2/token"
        }
      }
    ]
  }

1 Service interface that defines communication with the external system
2 URL of the external system
3 Authentication type
4 Authentication options; note that when extracted from the container, all sensitive information is replaced with "to be set"

Users can use the Service Descriptor Editor (available in the Settings product) to modify service descriptors and then export them for deployment.

Service descriptors should be mounted under the /data/des/system-identities folder as individual files. These files must follow the env_group_artifact_version naming convention, for example prod_finance_my-service_1.0.

In the case of multi service containers, each service's service descriptors must be placed in a separate file.

To avoid creating a file for each service, file names can be shortened so that they apply at a higher level. Use the file name prod_finance to apply to all services deployed to the prod environment under the finance group. Similarly, prod applies to all services in the prod environment, and prod_finance_my-service applies to all versions of that service.
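For example (paths and names are illustrative), the descriptor files can be mounted when starting the container:

```shell
# descriptors/ contains files named after the convention, such as
# prod_finance_my-service_1.0 or the broader prod_finance.
docker run --rm -p 8181:8181 \
    -v "$(pwd)/descriptors:/data/des/system-identities:ro" \
    my-registry.example.com/my-service:1.0
```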

Environment variables

  • Required. The license token for this container.

  • Required for MongoDB Storage. Set to mongo to use the MongoDB service storage or to fs to use the default file system persistence. Defaults to fs.

  • Required for MongoDB Storage. The connection string of MongoDB.

  • Optional for MongoDB Storage. The username to connect to MongoDB. This can also be defined in the connection string.

  • Optional for MongoDB Storage. The password to connect to MongoDB. This can also be defined in the connection string.

  • Optional for MongoDB Storage. When using a username/password, authorization can be done on a different database than admin.

  • Optional for MongoDB Storage. Number of milliseconds for which a lock on an instance is considered valid. Defaults to 60 seconds.

  • Optional for MongoDB Storage. Maximum time (in milliseconds) to try to acquire a lock. Defaults to 5000.

  • Optional for MongoDB Storage. Interval (in milliseconds) between attempts to acquire a lock. Defaults to 100 ms.

  • Optional for File System Storage. Number of milliseconds for which a lock on an instance is considered valid. Defaults to 60 seconds.

  • Optional for File System Storage. Maximum time (in milliseconds) to try to acquire a lock. Defaults to 5000.

  • Optional for File System Storage. Interval (in milliseconds) between attempts to acquire a lock. Defaults to 100 ms.

  • Optional configuration for the trigger service. Number of threads that process triggers such as timers and incoming messages (RabbitMQ, Apache Kafka). Defaults to 1.

  • Required for services sending emails. SMTP hostname.

  • Required for services sending emails. SMTP port number.

  • Required for services sending emails. Set to "true" if the SMTP port requires TLS; omit otherwise (or set to "false").

  • Optional for services sending emails. Username for the SMTP server, if one is required.

  • Optional for services sending emails. Password for the SMTP server, if one is required.

  • Optional for services sending emails. An email address to use in the FROM field if none was provided when sending an email.

  • Optional for services sending emails. An email address to use in the TO field if none was provided when sending an email.

  • A timezone for the container. Valid values can be found in the TZ database name column of the Wikipedia list of time zones (e.g. Canada/Eastern). Defaults to the host timezone.

  • DES_INFO. When set, the container only displays information about itself instead of starting (allowed values: json, kubernetes, text). Used to get the service descriptors (connectivity with external systems) utilized by the container's services.

  • Controls the validation mode of interfaces at startup. Accepted values: true (fail startup on validation errors) or false (only log validation errors). Defaults to false.

  • Controls the web session timeout. The value is numeric, in minutes. Defaults to no timeout.

  • SSO_METHOD. Allows disabling security completely (by setting it to none) and running the container in open mode.

  • OIDC_ISSUER. The OpenID Connect issuer URL: the URL that the provider uses to issue tokens.

  • OIDC_CLIENT_ID. The client id (also known as application id) of the configuration on the OpenID Connect provider side.

  • OIDC_CLIENT_SECRET. The client secret of the configuration on the OpenID Connect provider side. Using PKCE is recommended to avoid exposing secret information.