Backup

The Digital Enterprise Suite (DES) supports automatically backing up user data every night to an AWS S3 bucket, a Google Cloud Platform (GCP) bucket, or an Azure Storage blob. It is also possible to develop a custom backup strategy. The backup script creates a daily archive, and the cloud storage provider is selected through environment variables defined in the des-backup secret.

You can also rely on your infrastructure's capabilities to create disk snapshots instead of using this backup functionality.
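
As an example of the snapshot-based approach, a cluster with a CSI driver that supports volume snapshots can request a snapshot of the DES data volume with a VolumeSnapshot resource. This is a minimal sketch, not the DES backup mechanism: the snapshot class and the persistent volume claim name are placeholders that depend on your cluster.

cat << EOF | kubectl apply -f - -n des
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: des-data-snapshot
spec:
  # Placeholder: the VolumeSnapshotClass configured for your CSI driver
  volumeSnapshotClassName: <VOLUME_SNAPSHOT_CLASS>
  source:
    # Placeholder: the persistent volume claim holding the DES data
    persistentVolumeClaimName: <DATA_VOLUME_CLAIM>
EOF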

Automated Backup

Backups to a cloud provider can be performed automatically using the maintenance script. Once the des-backup secret is configured for the chosen provider, update the deployment to enable backups:

helm upgrade des des-<VERSION>.tgz \
  --namespace des \
  --reuse-values \
  --set backup.enabled=true
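
If the chart schedules the nightly backup through a Kubernetes CronJob (an assumption about the chart internals; the resource name may differ), you can confirm that it was created after the upgrade:

kubectl get cronjobs -n des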

Configure AWS S3

Variable                 Definition
AWS_ACCESS_KEY_ID        The AWS access key ID given on user creation (optional)
AWS_SECRET_ACCESS_KEY    The AWS secret associated with the access key (optional)
AWS_ENDPOINT_URL         The AWS endpoint URL to use (optional); mostly used for S3-compatible storage
BUCKET                   The name of the S3 bucket

cat << EOF > des-backup.yaml
apiVersion: v1
kind: Secret
metadata:
  name: des-backup
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: <AWS_ACCESS_KEY_ID>
  AWS_SECRET_ACCESS_KEY: <AWS_SECRET_ACCESS_KEY>
  BUCKET: <BUCKET>
EOF

kubectl apply -f des-backup.yaml -n des
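
Once a backup has run, you can verify that archives are being written to the bucket with the AWS CLI. This is only a sketch; the prefix under which the script stores its archives is not specified here, so the whole bucket is listed:

aws s3 ls s3://<BUCKET>/ --recursive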

Configure GCE GS (using a service account key)

Variable              Definition
GCE_SERVICEACCOUNT    Base64-encoded content of the service account JSON key file (see the example below)
BUCKET                The name of the Google Storage bucket
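
The Base64 value can be produced from the downloaded service account key file. The file name service-account.json is only an example:

base64 -w 0 service-account.json    # GNU coreutils; on macOS use: base64 -i service-account.json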

cat << EOF > des-backup.yaml
apiVersion: v1
kind: Secret
metadata:
  name: des-backup
type: Opaque
stringData:
  GCE_SERVICEACCOUNT: <GCE_SERVICEACCOUNT>
  BUCKET: <BUCKET>
EOF

kubectl apply -f des-backup.yaml -n des

Configure GCE GS (using a Kubernetes service account)

Using a Kubernetes service account avoids exposing keys in a secret and offers a more secure way to access Google Cloud Storage. You must have a valid Workload Identity Federation configuration in the cluster. Here are the commands to create the service account and bind the IAM role to the principal:

Variable             Definition
BUCKET               The name of the Google Storage bucket
NAMESPACE            The Kubernetes namespace
PROJECT_NUMBER       The Google Cloud project number (gcloud projects list)
PROJECT_ID           The Google Cloud project ID (gcloud projects list)
SERVICE_ACCOUNT      The name of the Kubernetes service account (e.g. backup-sa)
IMAGE_PULL_SECRET    The secret used to pull images from the container registry (optional)

kubectl create serviceaccount <SERVICE_ACCOUNT> --namespace des

# [OPTIONAL] Required only if you are using the default service account to log in to the container image registry
kubectl patch -n des serviceaccount <SERVICE_ACCOUNT> -p '{"imagePullSecrets": [{"name": "<IMAGE_PULL_SECRET>"}]}'

gcloud storage buckets add-iam-policy-binding gs://<BUCKET> \
  --role=roles/storage.objectCreator \
  --member=principal://iam.googleapis.com/projects/<PROJECT_NUMBER>/locations/global/workloadIdentityPools/<PROJECT_ID>.svc.id.goog/subject/ns/des/sa/<SERVICE_ACCOUNT> \
  --condition=None
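
To verify that the binding was applied, inspect the bucket's IAM policy:

gcloud storage buckets get-iam-policy gs://<BUCKET>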

Variable    Definition
NAME        The Helm deployment name (helm ls -n des)
CHART       The Helm chart file (e.g. des-12.7.0.tgz)

cat << EOF | kubectl apply -f - -n des
apiVersion: v1
kind: ConfigMap
metadata:
  name: des-backup
data:
  BUCKET: <BUCKET>
EOF

helm upgrade <NAME> <CHART> \
  --namespace des \
  --reuse-values \
  --set backup.enabled=true \
  --set backup.serviceAccountName=<SERVICE_ACCOUNT>

Configure Azure Storage Blob

Variable                           Definition
AZURE_STORAGE_CONNECTION_STRING    The Azure Storage connection string (https://learn.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string)
BUCKET                             The name of the Azure Storage container

The AZURE_STORAGE_CONNECTION_STRING should have DefaultEndpointsProtocol, AccountName, AccountKey and EndpointSuffix defined.
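
For reference, a connection string containing those fields typically looks like the following (the account name and key are placeholders):

DefaultEndpointsProtocol=https;AccountName=<ACCOUNT_NAME>;AccountKey=<ACCOUNT_KEY>;EndpointSuffix=core.windows.net
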
cat << EOF > des-backup.yaml
apiVersion: v1
kind: Secret
metadata:
  name: des-backup
type: Opaque
stringData:
  AZURE_STORAGE_CONNECTION_STRING: <AZURE_STORAGE_CONNECTION_STRING>
  BUCKET: <BUCKET>
EOF

kubectl apply -f des-backup.yaml -n des
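
After a backup has run, the container contents can be checked with the Azure CLI using the same connection string. This is a sketch and assumes the Azure CLI is installed locally:

az storage blob list \
  --container-name <BUCKET> \
  --connection-string "<AZURE_STORAGE_CONNECTION_STRING>" \
  --output table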

Backup to another Kubernetes volume claim

The deployment can be updated to enable backups to another volume claim within Kubernetes:

helm upgrade des des-<VERSION>.tgz \
  --namespace des \
  --reuse-values \
  --set backup.backupVolumeClaim=<CLAIMNAME>
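
This assumes the claim already exists in the des namespace and is referenced by name. A minimal sketch of such a claim is shown below; the access mode, storage class, and size are placeholders to adapt to your cluster:

cat << EOF | kubectl apply -f - -n des
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <CLAIMNAME>
spec:
  accessModes:
    - ReadWriteOnce
  # Placeholder: a storage class available in your cluster
  storageClassName: <STORAGE_CLASS>
  resources:
    requests:
      storage: 50Gi
EOF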

Manual Backup

To manually export the data from the running Digital Enterprise Suite pod:

kubectl exec -n des <POD> -- tar czf - --xattrs -C /data/des . > data.tar.gz

This will create a data.tar.gz file locally.
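
The pod name can be obtained with kubectl get pods -n des. To sanity-check the archive contents locally:

tar tzf data.tar.gz | head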

Restoring a backup

The following assumes a local backup file named data.tar.gz. If you are using a cloud provider backup, download the backup archive locally first.

kubectl exec -n des -i <POD> -- tar xzf - --xattrs -C /data/des --strip 1 < data.tar.gz

A restart is required after restoring a backup.
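
For example, if DES runs as a Deployment (check the workload name with kubectl get deployments -n des), it can be restarted with:

kubectl rollout restart deployment <DEPLOYMENT> -n des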

MongoDB Backup and Restoration

Trisotech does not provide functionality to back up or restore the MongoDB storage used for persistence, if it was deployed. The client is responsible for developing their own backup and restore procedures for MongoDB.
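
As a starting point, a dump can be taken with mongodump inside the MongoDB pod and streamed to a local file, then restored with mongorestore. This is a sketch only: the pod name and any authentication options (for example --username, --password, --authenticationDatabase) depend on your MongoDB deployment.

# Dump the database to a local, gzip-compressed archive
kubectl exec -n des <MONGODB_POD> -- mongodump --archive --gzip > mongodb.archive.gz

# Restore the archive into the running MongoDB instance
kubectl exec -n des -i <MONGODB_POD> -- mongorestore --archive --gzip < mongodb.archive.gz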

Using native solutions for backups

Some environments do not allow a volume claim to be mounted by multiple pods. In those cases, the customer is responsible for finding a solution suited to the technology being used. The examples below document potential approaches, but we do not provide any support for them.

First, you will need to disable our backup scripts:

helm upgrade des des-<VERSION>.tgz \
  --namespace des \
  --reuse-values \
  --set backup.enabled=false

EBS

We recommend using AWS Backup for this type of volume.
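
As a sketch, an on-demand backup of an EBS volume can be started with the AWS CLI; the vault name, volume ARN, and IAM role below are placeholders, and most deployments would instead schedule backups through an AWS Backup plan:

aws backup start-backup-job \
  --backup-vault-name <VAULT_NAME> \
  --resource-arn arn:aws:ec2:<REGION>:<ACCOUNT_ID>:volume/<VOLUME_ID> \
  --iam-role-arn arn:aws:iam::<ACCOUNT_ID>:role/<BACKUP_ROLE>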