Digital Distributed Containers (DDC)

A Digital Distributed Container subscription (Single and/or Multi Service Container) is required.

The Digital Distributed Container functionality allows you to create standalone containers containing either a single service or multiple services. These containers can then be deployed on a container-based infrastructure and scaled according to the organization’s needs.

Single Service Container

Micro-service pattern: An individual service is exported as a container that can be deployed on your organization’s infrastructure. Licensed using Single Service Container Tokens (SSCT).

Multi Service Container

Bundled-service pattern: Multiple services are selected and exported as a container that can be deployed on your organization’s infrastructure. Licensed using Multi Service Container Tokens (MSCT).

Each container requires its own license token to execute. A token of the proper type (SSCT or MSCT) is required for each deployment. A deployment is defined as a container image being deployed in a single location. This means that scaling a given container locally in the same location can be done with a single token, while geographic scaling requires a token per location.

Creating a Digital Distributed Container

Before containers can be built, the administrator must configure the base image for the containers.

Single Service Container

Single service containers are automatically created when a service is published to the Service Library from the modelers.

Each execution environment can be configured to build the containers either locally or push them to a remote registry.

Multi Service Container

Multi service containers are built on demand from the Service Library by selecting the services to include in the container and pushing the container image to a remote registry.

Securing a Digital Distributed Container

There are two aspects to consider when deploying a container: its transport layer security and its user provider configuration.

Transport Layer

The containers expose an HTTP port (8181) giving access to the user interface and REST API. This web interface should be secured using Transport Layer Security (TLS), most commonly referred to as HTTPS. There are two main strategies to secure the container: using an encrypted ingress or making the exposed port HTTPS.

Encrypted ingress

A common way of dealing with transport layer encryption in a Kubernetes environment is to rely on an ingress configured with an automatic HTTPS certificate manager (such as Let’s Encrypt) to secure the ingress.

Encrypting the exposed port

It is also possible to let the Digital Distributed Container use a provided certificate and encryption key to serve HTTPS instead of HTTP on port 8181.

Mounting a PEM-encoded HTTPS certificate file at /data/https/tls.crt and a PEM-encoded private key file at /data/https/tls.key automatically encrypts the exposed port with the HTTPS certificate using the private key.
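
As an illustration, assuming a plain Docker deployment and reusing the example image name that appears later on this page, the certificate and key files can be provided as read-only bind mounts:

  docker run -d -p 8181:8181 \
          -v "$(pwd)/tls.crt:/data/https/tls.crt:ro" \
          -v "$(pwd)/tls.key:/data/https/tls.key:ro" \
          container.registry.com:5001/services/bpmn/my-service:latest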

The following openssl command can be used to generate a self-signed sample certificate and key:

  openssl req -newkey rsa:4096 \
          -x509 \
          -sha256 \
          -days 3650 \
          -nodes \
          -subj "/CN=myorg.com" \
          -out tls.crt \
          -keyout tls.key
Trisotech recommends never exposing a Digital Distributed Container over an unencrypted transport layer.
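
In a Kubernetes environment, the certificate and key pair can instead be stored in a TLS secret and mounted at /data/https. A minimal sketch, with illustrative names (ddc-tls, ddc):

  kubectl create secret tls ddc-tls --cert=tls.crt --key=tls.key

The secret is then mounted in the pod template; a Kubernetes TLS secret carries the keys tls.crt and tls.key, so the mount yields the expected /data/https/tls.crt and /data/https/tls.key files:

  spec:
    containers:
    - name: ddc
      volumeMounts:
      - name: tls
        mountPath: /data/https
        readOnly: true
    volumes:
    - name: tls
      secret:
        secretName: ddc-tls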

Importing self signed certificates

If you are trying to communicate with services that use self-signed certificates, you will need to import those certificates into the Digital Distributed Container trust store.

To import a self-signed certificate, copy the certificate files (they must have the .crt extension) into the /data/des/ca path.

A restart of the DDC (pod) is required for any certificate change to take effect.
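
For example, assuming a Docker deployment with a local directory ./ca-certs (an illustrative name) containing the .crt files, and reusing the example image name from this page:

  docker run -d -p 8181:8181 \
          -v "$(pwd)/ca-certs:/data/des/ca:ro" \
          container.registry.com:5001/services/bpmn/my-service:latest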

User Provider

The container is secured by default. Users and services must authenticate before they are allowed to use the container. Containers built from within the Digital Enterprise Suite (DES) are automatically configured to use that DES instance as their authorization server. No additional configuration is required aside from network connectivity between the deployed containers and the DES instance.

Important security aspects that should be taken into account when building and running digital distributed containers:

  1. Containers are built for a given execution environment (e.g. prod)

  2. Security restrictions from the execution environment are included in the container

  3. Access rights are enforced based on execution environment settings

Users that have access to DES will also have access to the container’s services as long as they are granted roles within the environment the container was built for.

Security of the Digital Distributed Containers is based on the OpenID Connect standard to allow easy configuration with external OpenID Connect providers as an alternative to the Digital Enterprise Suite.

When using an external OpenID Connect provider, not all security features are available. Information about the groups a user is a member of is not available, and execution environment access is not synced at runtime, which might result in users not being permitted to perform certain operations.

Disabling security

Security can be disabled to run containers in open mode without requiring authentication. This could be required if there is no connectivity between the container and the DES, or if you want to implement an external authorization mechanism.

Setting the SSO_METHOD environment variable to the value none when starting the container runs it in open mode.
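
For example, with Docker (image name reused from the example later on this page):

  docker run -d -p 8181:8181 \
          --env SSO_METHOD=none \
          container.registry.com:5001/services/bpmn/my-service:latest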

Using an external OpenID Connect provider

To use an external OpenID Connect provider, the following environment variables must be provided when starting the container:

  • OIDC_ISSUER - URL of the provider that acts as issuer and also the starting point for discovery of the OpenID Connect configuration

  • OIDC_CLIENT_ID - client id of the application created in the OpenID Connect provider

  • OIDC_CLIENT_SECRET - corresponding client secret created in the OpenID Connect provider. Only required when the OpenID Connect provider does not allow PKCE

If group information is required, then the client application on the OpenID Connect provider should be configured to return group names as a groups claim in the ID token.
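
A sketch of a docker run invocation with these variables; the issuer URL, client id, and secret are placeholders to be replaced with your provider’s values:

  docker run -d -p 8181:8181 \
          --env OIDC_ISSUER=https://login.example.com/ \
          --env OIDC_CLIENT_ID=my-ddc-client \
          --env OIDC_CLIENT_SECRET=change-me \
          container.registry.com:5001/services/bpmn/my-service:latest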

Groups for task assignment

Containers do not retain user group membership; to properly use the email notification channel with a group defined as performer, the group must have an email address configured.

Service storage

For services that require state persistence (long-running workflows and cases), it is possible to use either the file system, through a persistent volume mounted under /data/des/pinstances, or a MongoDB connection.

By default, unless the proper environment variables are set, file system persistence is used. If no persistent volume is mounted under /data/des/pinstances, the state will be deleted between container restarts.

When using file system persistence with multiple container instances running in parallel, the same persistent volume must be mounted on each container instance.
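
For example, with Docker, a named volume (here ddc-state, an illustrative name) can be mounted at the persistence path so state survives restarts:

  docker run -d -p 8181:8181 \
          -v ddc-state:/data/des/pinstances \
          container.registry.com:5001/services/bpmn/my-service:latest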

When MongoDB is configured (through environment variables), the storage is delegated to a MongoDB service (single or cluster) that can be scaled. Each service writes to its own database. Each data store (if subscribed) is stored in a separate database. Therefore, multiple services can share the same MongoDB service concurrently (either multiple single service containers or multiple multi service containers).
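
A sketch of the corresponding configuration; the connection string, user, and password are placeholders, and the variables are described in the Environment variables section below:

  docker run -d -p 8181:8181 \
          --env PERSISTENCE_TYPE=mongo \
          --env PERSISTENCE_MONGO_CONNECTION="mongodb://mongo.example.com:27017" \
          --env PERSISTENCE_MONGO_USER=ddc \
          --env PERSISTENCE_MONGO_PASSWORD=change-me \
          container.registry.com:5001/services/bpmn/my-service:latest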

Override service descriptors

For services that communicate with external systems (e.g. REST APIs, triggers), there is a need to provide different values, called service descriptors, when deploying to various environments. A common case is a container promoted through different environments (dev → test → prod) that must then point to the corresponding environments of the external systems.

Required service descriptors can be extracted from the container by running it with the environment variable DES_INFO=json set. An example docker run command would look like the following:

docker run --rm --env DES_INFO=json container.registry.com:5001/services/bpmn/my-service:latest

This command will print the following output:

{
  "prod/bpmn/my-service/1.0": [
    {
      "interfaceId": "_c9e1e003-ec00-47bc-abfe-10df53c1ed46", // 1
      "interfaceName": "Calculate Body Mass Index",
      "id": "6332b6e2-db81-4885-9a28-d20beb42e92e",
      "serviceUrl": "http://localhost:8080/execution", // 2
      "updated": "Mar 15, 2021, 12:17:11 PM",
      "name": "bpmn-test",
      "type": "oauth2", // 3
      "options": {  // 4
        "clientSecret": "to be set",
        "clientId": "to be set",
        "accessToken": "to be set",
        "tokenType": "Bearer",
        "refreshUrl": "http://localhost:8080/oauth2/token"
      }
    }
  ]
}
  1. Service interface that defines communication with external system

  2. URL of the external system

  3. Authentication type

  4. Authentication options; note that when extracted from the container, all sensitive information is replaced with to be set

Users can use the Service Descriptor Editor (available in the Settings product) to modify service descriptors and then export them for deployment.

Service descriptors should be mounted under the /data/des/system-identities folder as individual files. These files must follow the env_group_artifact_version naming convention, for example prod_finance_my-service_1.0.

In the case of multi service containers, each service’s service descriptors must be placed in a separate file.

To avoid creating files for each service, file names can be shortened to apply at a higher level. Use the file name prod_finance to apply to all services that are deployed to the prod environment and belong to the finance group. Similarly, prod applies to all services in the prod environment, and prod_finance_my-service to all versions of that service.
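
For example, with Docker, a local directory of descriptor files (here ./descriptors, an illustrative name) can be mounted at that path:

  docker run -d -p 8181:8181 \
          -v "$(pwd)/descriptors:/data/des/system-identities:ro" \
          container.registry.com:5001/services/bpmn/my-service:latest

where ./descriptors contains files such as prod_finance_my-service_1.0.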

Managed identities

Managed identities are only available in self-hosted installations.

Credentials can also be configured as part of the execution environment where the DDC is running. That means they are provided automatically, and there is no need to specify them directly in the service descriptors.

Managed identities are based on cloud-provider-specific solutions; the following are currently supported:

  • AWS IAM Roles for Service Accounts (IRSA)

  • Google Cloud Workload Identity Federation

  • Azure Workload Identity

All of the above are very similar in terms of configuration on the DDC side; they require:

  • dedicated Kubernetes service account that the DDC runs with

  • managed identity to be explicitly enabled via the CLOUD_MANAGED_IDENTITY environment variable

Managed identities essentially allow access rights to be configured externally for the service account; any service (process, decision, case) running in a given DDC inherits these rights. At the same time, managed identities do not expose any credentials, as they are obtained internally and are based on short-lived access tokens.

Before managed identity can be enabled, users need to configure the execution environment according to the official documentation of the cloud provider.

To use managed identities, the selected identity must have the authentication option useManagedIdentity enabled. This authentication option can either be manually added in the service descriptor file or set via the Service Descriptor Editor.

To set it manually, add "useManagedIdentity": "true" to the authentication options. A more complete example looks like the following:

{
  "prod/bpmn/my-service/1.0": [
    {
      "interfaceId": "_c9e1e003-ec00-47bc-abfe-10df53c1ed46",
      "interfaceName": "Google Cloud Storage",
      "id": "6332b6e2-db81-4885-9a28-d20beb42e92e",
      "serviceUrl": "https://www.googleapis.com",
      "updated": "Mar 15, 2024, 12:17:11 PM",
      "name": "bpmn-test",
      "type": "oauth2",
      "options": {
        "useManagedIdentity": "true"
      }
    }
  ]
}

Once the cloud-provider-specific configuration is completed, the DDC deployment manifest needs to be modified to use the service account:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    ...
  name: my-ddc-app
spec:
  replicas: 1
  selector:
    matchLabels:
      ...
  template:
    metadata:
      labels:
        ...
    spec:
      serviceAccountName: my-service-account
      containers:
      - env:
          - name: CLOUD_MANAGED_IDENTITY
            value: aws
...
There might be additional changes required in the deployment manifest; e.g. with Azure Workload Identity, pods also need the additional label `azure.workload.identity/use: "true"`.
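
For Azure Workload Identity, that label goes on the pod template; extending the deployment manifest shown above, the fragment could look like:

  template:
    metadata:
      labels:
        azure.workload.identity/use: "true"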

Environment variables

Each entry below lists the environment variable name, whether it is required, and its description.

LICENSE

Required

The license token for this container.

PERSISTENCE_TYPE

Required for MongoDB Storage

Set to mongo to use the MongoDB service storage or to fs to use default file system persistence. Defaults to fs.

PERSISTENCE_MONGO_CONNECTION

Required for MongoDB Storage

The connection string of MongoDB.

PERSISTENCE_MONGO_USER

Optional for MongoDB Storage

The username to connect to MongoDB. This can also be defined in the connection string.

PERSISTENCE_MONGO_PASSWORD

Optional for MongoDB Storage

The password to connect to MongoDB. This can also be defined in the connection string.

PERSISTENCE_MONGO_DATABASE

Optional for MongoDB Storage

When using a username/password, authorization could be done on a different database than admin.

PERSISTENCE_MONGO_LOCK

Optional for MongoDB Storage

Number of milliseconds to consider a lock still valid on an instance. Defaults to 60000 (60 seconds).

PERSISTENCE_MONGO_LOCK_LIMIT

Optional for MongoDB Storage

Maximum time (in milliseconds) to try to acquire a lock. Defaults to 5000.

PERSISTENCE_MONGO_LOCK_WAIT

Optional for MongoDB Storage

Interval (in milliseconds) between trying to acquire locks. Defaults to 100 ms.

PERSISTENCE_FS_LOCK

Optional for File System Storage

Number of milliseconds to consider a lock still valid on an instance. Defaults to 60000 (60 seconds).

PERSISTENCE_FS_LOCK_LIMIT

Optional for File System Storage

Maximum time (in milliseconds) to try to acquire a lock. Defaults to 5000.

PERSISTENCE_FS_LOCK_WAIT

Optional for File System Storage

Interval (in milliseconds) between trying to acquire locks. Defaults to 100 ms.

TRIGGER_THREAD_COUNT

Optional configuration for trigger service

Number of threads that will process triggers such as timers and incoming messages (RabbitMQ, Apache Kafka). Defaults to 1.

SMTP_HOST

Required for services sending emails

SMTP hostname.

SMTP_PORT

Required for services sending emails

SMTP port number.

SMTP_TLS

Required for services sending emails

Set to "true" if the SMTP port requires TLS, omit otherwise (or set to "false").

SMTP_USERNAME

Optional for services sending emails

Optional username for the SMTP server if it is required.

SMTP_PASSWORD

Optional for services sending emails

Optional password for the SMTP server if it is required.

EMAIL_FROM

Optional for services sending emails

An email address to use in the FROM field if none was provided when sending an email.

EMAIL_TO

Optional for services sending emails

An email address to use in the TO field if none was provided when sending an email.

TZ

Optional

A timezone for the container. The accepted values can be found in the TZ database name column of the Wikipedia list of tz database time zones (eg. Canada/Eastern). Defaults to the host timezone.

BASE_URL

Optional

Fixes a base URL (eg: https://ddc.org.com) for the web interface. This is not required unless URL rewriting happens on the request. Also note that the X-Forwarded-Proto and X-Forwarded-Port headers are supported and should be preferred over setting BASE_URL.

BASE_PATH

Optional

Fixes a base context path for the web interface (eg: /ddc). This is useful if your proxy strategy exposes multiple DDCs on the same base URL.

DES_INFO

Optional

When set, the container will only display information about itself instead of starting (allowed values: json, kubernetes, text). Used to get the service descriptors (connectivity with external systems) utilized by the container’s services.

VALIDATION_INTERFACES_STRICT

Optional

Controls the validation mode of interfaces at startup. Accepted values: true (to fail startup in case of validation errors) or false (to only log validation errors). Defaults to false.

SESSION_TIMEOUT

Optional

Controls the web session timeout. This value is numeric, in minutes. Defaults to no timeout.

SSO_METHOD

Optional

Allows security to be disabled completely (by setting the value to none), running the container in open mode.

OIDC_ISSUER

Optional

The OpenID Connect issuer URL - the URL that the provider uses to issue tokens.

OIDC_CLIENT_ID

Optional

The client id (also known as application id) of the configuration on OpenID Connect provider side.

OIDC_CLIENT_SECRET

Optional

The client secret of the configuration on the OpenID Connect provider side. Using PKCE is recommended to avoid exposing secret information.

CLOUD_MANAGED_IDENTITY

Optional

Enables managed identities for service-to-service integration; allowed values are aws, gcp, and azure to enable AWS, Google Cloud, and Azure managed identities respectively.