Temporal Cluster deployment guide
This guide provides a comprehensive overview of how to deploy and operate a Temporal Cluster in a live environment.
This guide is a work in progress. Some sections may be incomplete. Information may change at any time.
Legacy production deployment information is available here
Visibility store
A Visibility store is set up as a part of your Persistence store to enable listing and filtering details about Workflow Executions that exist on your Temporal Cluster.
A Visibility store is required in a Temporal Cluster setup because the Temporal Web UI and CLI use it to pull Workflow Execution data, and it enables features such as batch operations on a group of Workflow Executions.
With the Visibility store, you can use List Filters with Search Attributes to list and filter Workflow Executions that you want to review.
Setting up advanced Visibility enables creating and using multiple custom Search Attributes with your List Filters.
For details, see Search Attributes.
Note that Temporal Server version 1.20 and later supports advanced Visibility features on MySQL (v8.0.17 and later), PostgreSQL (v12 and later), and SQLite (v3.31.0 and later), in addition to Elasticsearch.
To enable advanced Visibility on your SQL databases, ensure that you do the following:
- Upgrade your Temporal Server to version 1.20 or later.
- Update your database schemas for MySQL to version 8.0.17 (or later), PostgreSQL to version 12 (or later), or SQLite to v3.31.0 (or later).
Beginning with Temporal Server v1.21, you can set up a secondary Visibility store in your Temporal Cluster to enable Dual Visibility.
This is useful for migrating your Visibility store database.
Supported databases
The following databases are supported as Visibility stores:
- MySQL v5.7 and later. Use v8.0.17 (or later) with Temporal Server v1.20 or later for advanced Visibility capabilities. Because standard Visibility is deprecated beginning with Temporal Server v1.21, support for older versions of MySQL will be dropped.
- PostgreSQL v9.6 and later. Use v12 (or later) with Temporal Server v1.20 or later for advanced Visibility capabilities. Because standard Visibility is deprecated beginning with Temporal Server v1.21, support for older versions of PostgreSQL will be dropped.
- SQLite v3.31.0 and later for advanced Visibility capabilities.
- Cassandra. Support for Cassandra as a Visibility database is deprecated beginning with Temporal Server v1.21.
- Elasticsearch (supported versions). We recommend operating a Temporal Cluster with Elasticsearch as your Visibility store for any use case that spawns more than a few Workflow Executions.
You can use any combination of the supported databases for your Persistence and Visibility stores. For updates, check Server release notes.
MySQL
- MySQL v5.7 and later.
- Support for MySQL v5.7 will be deprecated for all Temporal Server versions after v1.20.
- With Temporal Server version 1.20 and later, advanced Visibility is available on MySQL v8.0.17 and later.
You can set MySQL as your Visibility store.
Verify supported versions before you proceed.
If you use MySQL v8.0.17 or later as your Visibility store with Temporal Server v1.20 and later, any custom Search Attributes that you create must be associated with a Namespace in that Cluster.
Persistence configuration
Set your MySQL Visibility store name in the `visibilityStore` parameter in your Persistence configuration, and then define the Visibility store configuration under `datastores`.
The following example shows how to set a Visibility store named `mysql-visibility` and define the datastore configuration in your Temporal Cluster configuration YAML.
#...
persistence:
  #...
  visibilityStore: mysql-visibility
  #...
  datastores:
    default:
      #...
    mysql-visibility:
      sql:
        pluginName: "mysql8" # For MySQL v8.0.17 and later. For earlier versions, use the "mysql" plugin.
        databaseName: "temporal_visibility"
        connectAddr: " " # Remote address of this database; for example, 127.0.0.0:3306
        connectProtocol: " " # Protocol example: tcp
        user: "username_for_auth"
        password: "password_for_auth"
        maxConns: 2
        maxIdleConns: 2
        maxConnLifetime: "1h"
#...
For details on the configuration parameters and values, see the Temporal Cluster configuration reference.
To enable advanced Visibility features on your MySQL Visibility store, upgrade to MySQL v8.0.17 or later with Temporal Server v1.20 or later.
See Upgrade Server for details on how to upgrade your Temporal Server and database schemas.
For example configuration templates, see MySQL Visibility store configuration.
Database schema and setup
Visibility data is stored in a database table called `executions_visibility` that must be set up according to the schemas defined for the supported MySQL versions.
The following example shows how the auto-setup.sh script sets up your Visibility store.
#...
# set your MySQL environment variables
: "${DBNAME:=temporal}"
: "${VISIBILITY_DBNAME:=temporal_visibility}"
: "${DB_PORT:=}"
: "${MYSQL_SEEDS:=}"
: "${MYSQL_USER:=}"
: "${MYSQL_PWD:=}"
: "${MYSQL_TX_ISOLATION_COMPAT:=false}"
#...
# set connection details
#...
# set up MySQL schema
setup_mysql_schema() {
#...
# use valid schema for the version of the database you want to set up for Visibility
VISIBILITY_SCHEMA_DIR=${TEMPORAL_HOME}/schema/mysql/${MYSQL_VERSION_DIR}/visibility/versioned
if [[ ${SKIP_DB_CREATE} != true ]]; then
temporal-sql-tool --ep "${MYSQL_SEEDS}" -u "${MYSQL_USER}" -p "${DB_PORT}" "${MYSQL_CONNECT_ATTR[@]}" --db "${VISIBILITY_DBNAME}" create
fi
temporal-sql-tool --ep "${MYSQL_SEEDS}" -u "${MYSQL_USER}" -p "${DB_PORT}" "${MYSQL_CONNECT_ATTR[@]}" --db "${VISIBILITY_DBNAME}" setup-schema -v 0.0
temporal-sql-tool --ep "${MYSQL_SEEDS}" -u "${MYSQL_USER}" -p "${DB_PORT}" "${MYSQL_CONNECT_ATTR[@]}" --db "${VISIBILITY_DBNAME}" update-schema -d "${VISIBILITY_SCHEMA_DIR}"
#...
}
Note that the script uses temporal-sql-tool to run the setup.
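If you are not using the auto-setup image, you can run the equivalent steps manually with temporal-sql-tool. The following is a minimal sketch based on the script above and the upgrade examples later in this guide; the endpoint, port, credentials, and schema path are placeholder assumptions that you should adjust for your environment.
# Create the Visibility database (skip if it already exists)
./temporal-sql-tool --ep localhost -p 3306 -u temporal -pw temporal --pl mysql8 --db temporal_visibility create
# Initialize schema versioning, then apply the versioned Visibility schema
./temporal-sql-tool --ep localhost -p 3306 -u temporal -pw temporal --pl mysql8 --db temporal_visibility setup-schema -v 0.0
./temporal-sql-tool --ep localhost -p 3306 -u temporal -pw temporal --pl mysql8 --db temporal_visibility update-schema -d ./schema/mysql/v8/visibility/versioned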
PostgreSQL
- PostgreSQL v9.6 and later.
- With Temporal Cluster version 1.20 and later, advanced Visibility is available on PostgreSQL v12 and later.
- Support for PostgreSQL v9.6 through v11 will be deprecated for all Temporal Server versions after v1.20; we recommend upgrading to PostgreSQL 12 or later.
You can set PostgreSQL as your Visibility store.
Verify supported versions before you proceed.
If you use PostgreSQL v12 or later as your Visibility store with Temporal Server v1.20 and later, any custom Search Attributes that you create must be associated with a Namespace in that Cluster.
Persistence configuration
Set your PostgreSQL Visibility store name in the `visibilityStore` parameter in your Persistence configuration, and then define the Visibility store configuration under `datastores`.
The following example shows how to set a Visibility store named `postgres-visibility` and define the datastore configuration in your Temporal Cluster configuration YAML.
#...
persistence:
  #...
  visibilityStore: postgres-visibility
  #...
  datastores:
    default:
      #...
    postgres-visibility:
      sql:
        pluginName: "postgres12" # For PostgreSQL v12 and later. For earlier versions, use the "postgres" plugin.
        databaseName: "temporal_visibility"
        connectAddr: " " # Remote address of this database; for example, 127.0.0.0:5432
        connectProtocol: " " # Protocol example: tcp
        user: "username_for_auth"
        password: "password_for_auth"
        maxConns: 2
        maxIdleConns: 2
        maxConnLifetime: "1h"
#...
To enable advanced Visibility features on your PostgreSQL Visibility store, upgrade to PostgreSQL v12 or later with Temporal Server v1.20 or later.
See Upgrade Server for details on how to upgrade your Temporal Server and database schemas.
Database schema and setup
Visibility data is stored in a database table called `executions_visibility` that must be set up according to the schemas defined for the supported PostgreSQL versions.
The following example shows how the auto-setup.sh script sets up your PostgreSQL Visibility store.
#...
# set your PostgreSQL environment variables
: "${DBNAME:=temporal}"
: "${VISIBILITY_DBNAME:=temporal_visibility}"
: "${DB_PORT:=}"
: "${POSTGRES_SEEDS:=}"
: "${POSTGRES_USER:=}"
: "${POSTGRES_PWD:=}"
#... set connection details
# set up PostgreSQL schema
setup_postgres_schema() {
#...
# use valid schema for the version of the database you want to set up for Visibility
VISIBILITY_SCHEMA_DIR=${TEMPORAL_HOME}/schema/postgresql/${POSTGRES_VERSION_DIR}/visibility/versioned
if [[ ${VISIBILITY_DBNAME} != "${POSTGRES_USER}" && ${SKIP_DB_CREATE} != true ]]; then
temporal-sql-tool --plugin postgres --ep "${POSTGRES_SEEDS}" -u "${POSTGRES_USER}" -p "${DB_PORT}" --db "${VISIBILITY_DBNAME}" create
fi
temporal-sql-tool --plugin postgres --ep "${POSTGRES_SEEDS}" -u "${POSTGRES_USER}" -p "${DB_PORT}" --db "${VISIBILITY_DBNAME}" update-schema -d "${VISIBILITY_SCHEMA_DIR}"
#...
}
Note that the script uses temporal-sql-tool to run the setup.
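The same steps can be run manually with temporal-sql-tool. This is a minimal sketch that mirrors the MySQL example above; the endpoint, port, credentials, plugin, and schema path are assumptions to adjust for your environment, and the postgres12 plugin with the v12 schema directory applies only to PostgreSQL v12 or later.
# Create the Visibility database (skip if it already exists)
./temporal-sql-tool --ep localhost -p 5432 -u temporal -pw temporal --pl postgres12 --db temporal_visibility create
# Initialize schema versioning, then apply the versioned Visibility schema
./temporal-sql-tool --ep localhost -p 5432 -u temporal -pw temporal --pl postgres12 --db temporal_visibility setup-schema -v 0.0
./temporal-sql-tool --ep localhost -p 5432 -u temporal -pw temporal --pl postgres12 --db temporal_visibility update-schema -d ./schema/postgresql/v12/visibility/versioned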
SQLite
- SQLite v3.31.0 and later.
You can set SQLite as your Visibility store.
Verify supported versions before you proceed.
Temporal supports only an in-memory database with SQLite; this means that the database is automatically created when Temporal Server starts and is destroyed when Temporal Server stops.
You can change the configuration to use a file-based database so that it is preserved when Temporal Server stops. However, if you use a file-based SQLite database, upgrading your database schema to enable advanced Visibility features is not supported; in this case, you must delete the database and create it again to upgrade.
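The following is a minimal sketch of a file-based SQLite datastore, assuming the sqlite plugin accepts a file path in databaseName and a read-write-create connection mode; the path and attribute values are assumptions, so verify them against your Temporal Server version before relying on them.
sqlite-visibility:
  sql:
    pluginName: "sqlite"
    databaseName: "/var/lib/temporal/visibility.db" # hypothetical file path
    connectAttributes:
      mode: "rwc" # assumption: read-write-create file mode instead of "memory"
      cache: "private"
    maxConns: 1
    maxIdleConns: 1
    maxConnLifetime: "1h"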
If you use SQLite v3.31.0 or later as your Visibility store with Temporal Server v1.20 and later, any custom Search Attributes that you create must be associated with a Namespace in that Cluster.
Persistence configuration
Set your SQLite Visibility store name in the `visibilityStore` parameter in your Persistence configuration, and then define the Visibility store configuration under `datastores`.
The following example shows how to set a Visibility store named `sqlite-visibility` and define the datastore configuration in your Temporal Cluster configuration YAML.
persistence:
  # ...
  visibilityStore: sqlite-visibility
  # ...
  datastores:
    # ...
    sqlite-visibility:
      sql:
        user: "username_for_auth"
        password: "password_for_auth"
        pluginName: "sqlite"
        databaseName: "default"
        connectAddr: "localhost"
        connectProtocol: "tcp"
        connectAttributes:
          mode: "memory"
          cache: "private"
        maxConns: 1
        maxIdleConns: 1
        maxConnLifetime: "1h"
        tls:
          enabled: false
          caFile: ""
          certFile: ""
          keyFile: ""
          enableHostVerification: false
          serverName: ""
SQLite (v3.31.0 and later) has advanced Visibility enabled by default.
Database schema and setup
Visibility data is stored in a database table called `executions_visibility` that must be set up according to the schema defined in https://github.com/temporalio/temporal/blob/master/schema/sqlite/v3/visibility/schema.sql.
For an example of setting up the SQLite schema, see Temporalite setup.
Cassandra
- Support for Cassandra as a Visibility database is deprecated beginning with Temporal Server v1.21. For updates, check the Temporal Server release notes.
- We recommend migrating from Cassandra to any of the other supported databases for Visibility.
You can set Cassandra as your Visibility store.
Verify supported versions before you proceed.
Advanced Visibility is not supported with Cassandra.
To enable advanced Visibility features, use any of the supported databases, such as MySQL, PostgreSQL, SQLite, or Elasticsearch, as your Visibility store. We recommend using Elasticsearch for any Temporal Cluster setup that handles more than a few Workflow Executions because it supports the request load on the Visibility store and helps optimize performance.
To migrate from Cassandra to a supported SQL database, see Migrating Visibility database.
Persistence configuration
Set your Cassandra Visibility store name in the `visibilityStore` parameter in your Persistence configuration, and then define the Visibility store configuration under `datastores`.
The following example shows how to set a Visibility store named `cass-visibility` and define the datastore configuration in your Temporal Cluster configuration YAML.
#...
persistence:
  #...
  visibilityStore: cass-visibility
  #...
  datastores:
    default:
      #...
    cass-visibility:
      cassandra:
        hosts: "127.0.0.1"
        keyspace: "temporal_visibility"
#...
Database schema and setup
Visibility data is stored in a database table called `executions_visibility` that must be set up according to the schemas defined in https://github.com/temporalio/temporal/tree/master/schema/cassandra/visibility.
The following example shows how the auto-setup.sh script sets up your Visibility store.
#...
# set your Cassandra environment variables
: "${KEYSPACE:=temporal}"
: "${VISIBILITY_KEYSPACE:=temporal_visibility}"
: "${CASSANDRA_SEEDS:=}"
: "${CASSANDRA_PORT:=9042}"
: "${CASSANDRA_USER:=}"
: "${CASSANDRA_PASSWORD:=}"
: "${CASSANDRA_TLS_ENABLED:=}"
: "${CASSANDRA_CERT:=}"
: "${CASSANDRA_CERT_KEY:=}"
: "${CASSANDRA_CA:=}"
: "${CASSANDRA_REPLICATION_FACTOR:=1}"
#...
# set connection details
#...
# set up Cassandra schema
setup_cassandra_schema() {
#...
# use valid schema for the version of the database you want to set up for Visibility
VISIBILITY_SCHEMA_DIR=${TEMPORAL_HOME}/schema/cassandra/visibility/versioned
if [[ ${SKIP_DB_CREATE} != true ]]; then
temporal-cassandra-tool --ep "${CASSANDRA_SEEDS}" create -k "${VISIBILITY_KEYSPACE}" --rf "${CASSANDRA_REPLICATION_FACTOR}"
fi
temporal-cassandra-tool --ep "${CASSANDRA_SEEDS}" -k "${VISIBILITY_KEYSPACE}" setup-schema -v 0.0
temporal-cassandra-tool --ep "${CASSANDRA_SEEDS}" -k "${VISIBILITY_KEYSPACE}" update-schema -d "${VISIBILITY_SCHEMA_DIR}"
#...
}
Elasticsearch
- Elasticsearch v8 is supported beginning with Temporal Server version 1.18.0.
- Elasticsearch v7.10 is supported beginning with Temporal Server version 1.17.0.
- Elasticsearch v6.8 is supported through Temporal Server version 1.17.x.
- Elasticsearch v6.8 and v7.10 are explicitly supported with AWS Elasticsearch.
You can integrate Elasticsearch with your Temporal Cluster as your Visibility store. We recommend using Elasticsearch for large-scale operations on the Temporal Cluster.
To integrate Elasticsearch with your Temporal Cluster, edit the `persistence` section of your `development.yaml` configuration file to add Elasticsearch as the `visibilityStore`, and run the index schema setup commands.
Persistence configuration
Set your Elasticsearch Visibility store name in the `visibilityStore` parameter in your Persistence configuration, and then define the Visibility store configuration under `datastores`.
The following example shows how to set a Visibility store named `es-visibility` and define the datastore configuration in your Temporal Cluster configuration YAML.
persistence:
  ...
  visibilityStore: es-visibility
  datastores:
    ...
    es-visibility: # Define the Elasticsearch datastore connection information under the `es-visibility` key
      elasticsearch:
        version: "v7"
        url:
          scheme: "http"
          host: "127.0.0.1:9200"
        indices:
          visibility: temporal_visibility_v1_dev
Index schema and index
The following example shows how the auto-setup.sh script sets up an Elasticsearch Visibility store.
#...
# Elasticsearch
: "${ENABLE_ES:=false}"
: "${ES_SCHEME:=http}"
: "${ES_SEEDS:=}"
: "${ES_PORT:=9200}"
: "${ES_USER:=}"
: "${ES_PWD:=}"
: "${ES_VERSION:=v7}"
: "${ES_VIS_INDEX:=temporal_visibility_v1}"
: "${ES_SEC_VIS_INDEX:=}"
: "${ES_SCHEMA_SETUP_TIMEOUT_IN_SECONDS:=0}"
#...
# Validate your ES environment
#...
# Wait for ES to start
#...
# ES_SERVER is the URL of Elasticsearch server; for example, "http://localhost:9200".
SETTINGS_URL="${ES_SERVER}/_cluster/settings"
SETTINGS_FILE=${TEMPORAL_HOME}/schema/elasticsearch/visibility/cluster_settings_${ES_VERSION}.json
TEMPLATE_URL="${ES_SERVER}/_template/temporal_visibility_v1_template"
SCHEMA_FILE=${TEMPORAL_HOME}/schema/elasticsearch/visibility/index_template_${ES_VERSION}.json
INDEX_URL="${ES_SERVER}/${ES_VIS_INDEX}"
curl --fail --user "${ES_USER}":"${ES_PWD}" -X PUT "${SETTINGS_URL}" -H "Content-Type: application/json" --data-binary "@${SETTINGS_FILE}" --write-out "\n"
curl --fail --user "${ES_USER}":"${ES_PWD}" -X PUT "${TEMPLATE_URL}" -H 'Content-Type: application/json' --data-binary "@${SCHEMA_FILE}" --write-out "\n"
curl --user "${ES_USER}":"${ES_PWD}" -X PUT "${INDEX_URL}" --write-out "\n"
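To confirm that the index and template were created, you can query Elasticsearch directly; these are generic Elasticsearch APIs rather than Temporal commands.
# List the Visibility index and check its status
curl --user "${ES_USER}":"${ES_PWD}" "${ES_SERVER}/_cat/indices/${ES_VIS_INDEX}?v"
# Inspect the index template that was just installed
curl --user "${ES_USER}":"${ES_PWD}" "${ES_SERVER}/_template/temporal_visibility_v1_template"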
Elasticsearch privileges
Ensure that the following privileges are granted for the Elasticsearch Temporal index:
- Read: index privileges `create`, `index`, `delete`, and `read`.
- Write: index privilege `write`.
- Custom Search Attributes: index privilege `manage`, and cluster privilege `monitor` or `manage`.
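If you manage access with the Elasticsearch security APIs, a role that grants these privileges might look like the following sketch; the role name and index pattern are assumptions, and the API is part of Elasticsearch itself, not Temporal.
curl --user "${ES_USER}":"${ES_PWD}" -X PUT "${ES_SERVER}/_security/role/temporal_visibility" \
  -H 'Content-Type: application/json' \
  -d '{
    "cluster": ["monitor"],
    "indices": [
      {
        "names": ["temporal_visibility_v1*"],
        "privileges": ["create", "index", "delete", "read", "write", "manage"]
      }
    ]
  }'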
Dual Visibility
- Supported from Temporal Server v1.21 onwards.
To enable Dual Visibility, set up a secondary Visibility store with your primary Visibility store, and configure your Temporal Cluster to enable read and/or write operations on the secondary Visibility store.
With Dual Visibility, you can read from only one Visibility store at a time, but you can configure your Temporal Cluster to write to the primary store only, the secondary store only, or both the primary and secondary stores.
Set up secondary Visibility store
Set the secondary store with the `secondaryVisibilityStore` configuration key in your Persistence configuration, and then define the secondary Visibility store configuration under `datastores`.
You can configure any of the supported databases as your secondary store.
Examples:
To configure MySQL as a secondary store with Cassandra as your primary store, do the following.
persistence:
  visibilityStore: cass-visibility # This is your primary Visibility store
  secondaryVisibilityStore: mysql-visibility # This is your secondary Visibility store
  datastores:
    cass-visibility:
      cassandra:
        hosts: "127.0.0.1"
        keyspace: "temporal_primary_visibility"
    mysql-visibility:
      sql:
        pluginName: "mysql8" # Verify supported versions. Use a version of SQL that supports advanced Visibility.
        databaseName: "temporal_secondary_visibility"
        connectAddr: "127.0.0.1:3306"
        connectProtocol: "tcp"
        user: "temporal"
        password: "temporal"
To configure Elasticsearch as both your primary and secondary store, use the configuration key `elasticsearch.indices.secondary_visibility`, as shown in the following example.
persistence:
  visibilityStore: es-visibility
  datastores:
    es-visibility:
      elasticsearch:
        version: "v7"
        logLevel: "error"
        url:
          scheme: "http"
          host: "127.0.0.1:9200"
        indices:
          visibility: temporal_visibility_v1
          secondary_visibility: temporal_visibility_v1_new
        closeIdleConnectionsInterval: 15s
Database schema and setup
The database schema and setup for a secondary store depends on the database you plan to use.
For the Cassandra and MySQL configuration in the previous example, an example setup script would be as follows.
#...
# set your Cassandra environment variables
: "${KEYSPACE:=temporal}"
: "${VISIBILITY_KEYSPACE:=temporal_primary_visibility}"
: "${CASSANDRA_SEEDS:=}"
: "${CASSANDRA_PORT:=9042}"
: "${CASSANDRA_USER:=}"
: "${CASSANDRA_PASSWORD:=}"
: "${CASSANDRA_TLS_ENABLED:=}"
: "${CASSANDRA_CERT:=}"
: "${CASSANDRA_CERT_KEY:=}"
: "${CASSANDRA_CA:=}"
: "${CASSANDRA_REPLICATION_FACTOR:=1}"
#...
# set connection details
#...
# set up Cassandra schema
setup_cassandra_schema() {
#...
# use valid schema for the version of the database you want to set up for Visibility
VISIBILITY_SCHEMA_DIR=${TEMPORAL_HOME}/schema/cassandra/visibility/versioned
if [[ ${SKIP_DB_CREATE} != true ]]; then
temporal-cassandra-tool --ep "${CASSANDRA_SEEDS}" create -k "${VISIBILITY_KEYSPACE}" --rf "${CASSANDRA_REPLICATION_FACTOR}"
fi
temporal-cassandra-tool --ep "${CASSANDRA_SEEDS}" -k "${VISIBILITY_KEYSPACE}" setup-schema -v 0.0
temporal-cassandra-tool --ep "${CASSANDRA_SEEDS}" -k "${VISIBILITY_KEYSPACE}" update-schema -d "${VISIBILITY_SCHEMA_DIR}"
#...
}
#...
# set your MySQL environment variables
: "${DBNAME:=temporal}"
: "${VISIBILITY_DBNAME:=temporal_secondary_visibility}"
: "${DB_PORT:=}"
: "${MYSQL_SEEDS:=}"
: "${MYSQL_USER:=}"
: "${MYSQL_PWD:=}"
: "${MYSQL_TX_ISOLATION_COMPAT:=false}"
#...
# set connection details
#...
# set up MySQL schema
setup_mysql_schema() {
#...
# use valid schema for the version of the database you want to set up for Visibility
VISIBILITY_SCHEMA_DIR=${TEMPORAL_HOME}/schema/mysql/${MYSQL_VERSION_DIR}/visibility/versioned
if [[ ${SKIP_DB_CREATE} != true ]]; then
temporal-sql-tool --ep "${MYSQL_SEEDS}" -u "${MYSQL_USER}" -p "${DB_PORT}" "${MYSQL_CONNECT_ATTR[@]}" --db "${VISIBILITY_DBNAME}" create
fi
temporal-sql-tool --ep "${MYSQL_SEEDS}" -u "${MYSQL_USER}" -p "${DB_PORT}" "${MYSQL_CONNECT_ATTR[@]}" --db "${VISIBILITY_DBNAME}" setup-schema -v 0.0
temporal-sql-tool --ep "${MYSQL_SEEDS}" -u "${MYSQL_USER}" -p "${DB_PORT}" "${MYSQL_CONNECT_ATTR[@]}" --db "${VISIBILITY_DBNAME}" update-schema -d "${VISIBILITY_SCHEMA_DIR}"
#...
}
For the Elasticsearch configuration with both primary and secondary Visibility stores shown in the previous example, an example setup script would be as follows.
#...
# Elasticsearch
: "${ENABLE_ES:=false}"
: "${ES_SCHEME:=http}"
: "${ES_SEEDS:=}"
: "${ES_PORT:=9200}"
: "${ES_USER:=}"
: "${ES_PWD:=}"
: "${ES_VERSION:=v7}"
: "${ES_VIS_INDEX:=temporal_visibility_v1_dev}"
: "${ES_SEC_VIS_INDEX:=temporal_visibility_v1_new}"
: "${ES_SCHEMA_SETUP_TIMEOUT_IN_SECONDS:=0}"
#...
# Validate your ES environment
#...
# Wait for ES to start
#...
# Set up Elasticsearch index
setup_es_index() {
ES_SERVER="${ES_SCHEME}://${ES_SEEDS%%,*}:${ES_PORT}"
# ES_SERVER is the URL of Elasticsearch server i.e. "http://localhost:9200".
SETTINGS_URL="${ES_SERVER}/_cluster/settings"
SETTINGS_FILE=${TEMPORAL_HOME}/schema/elasticsearch/visibility/cluster_settings_${ES_VERSION}.json
TEMPLATE_URL="${ES_SERVER}/_template/temporal_visibility_v1_template"
SCHEMA_FILE=${TEMPORAL_HOME}/schema/elasticsearch/visibility/index_template_${ES_VERSION}.json
INDEX_URL="${ES_SERVER}/${ES_VIS_INDEX}"
curl --fail --user "${ES_USER}":"${ES_PWD}" -X PUT "${SETTINGS_URL}" -H "Content-Type: application/json" --data-binary "@${SETTINGS_FILE}" --write-out "\n"
curl --fail --user "${ES_USER}":"${ES_PWD}" -X PUT "${TEMPLATE_URL}" -H 'Content-Type: application/json' --data-binary "@${SCHEMA_FILE}" --write-out "\n"
curl --user "${ES_USER}":"${ES_PWD}" -X PUT "${INDEX_URL}" --write-out "\n"
# Checks for and sets up Elasticsearch as a secondary Visibility store
if [[ ! -z "${ES_SEC_VIS_INDEX}" ]]; then
SEC_INDEX_URL="${ES_SERVER}/${ES_SEC_VIS_INDEX}"
curl --user "${ES_USER}":"${ES_PWD}" -X PUT "${SEC_INDEX_URL}" --write-out "\n"
fi
}
Update Cluster configuration
With the primary and secondary stores set, update the `system.secondaryVisibilityWritingMode` and `system.enableReadFromSecondaryVisibility` configuration keys in your self-hosted Cluster's dynamic configuration YAML file to enable read and/or write operations on the secondary Visibility store.
For example, to enable write operations to both primary and secondary stores, but disable reading from the secondary store, use the following.
system.secondaryVisibilityWritingMode:
  - value: "dual"
    constraints: {}
system.enableReadFromSecondaryVisibility:
  - value: false
    constraints: {}
For details on the configuration options, see:
- Secondary Visibility dynamic configuration reference
- Migrating Visibility database
Migrating Visibility database
- Supported beginning with Temporal Server v1.21.
To migrate your Visibility database, set up a secondary Visibility store to enable Dual Visibility, and adjust the dynamic configuration in your Cluster to control the read and write operations for the Visibility store.
Dual Visibility setup is optional but useful in gradually migrating your Visibility data to another database.
Before you begin, verify supported databases and versions for a Visibility store.
The following steps describe how to migrate your Visibility database.
After you make any changes to your Cluster configuration, ensure that you restart your services.
Set up secondary Visibility store
In your Cluster configuration, add a secondary Visibility store to your Visibility setup under the Persistence configuration.
Example: To migrate from Cassandra to Elasticsearch, add Elasticsearch as your secondary database and set it up. For details, see secondary Visibility database schema and setup.
persistence:
  visibilityStore: cass-visibility
  secondaryVisibilityStore: es-visibility
  datastores:
    cass-visibility:
      cassandra:
        hosts: "127.0.0.1"
        keyspace: "temporal_visibility"
    es-visibility:
      elasticsearch:
        version: "v7"
        logLevel: "error"
        url:
          scheme: "http"
          host: "127.0.0.1:9200"
        indices:
          visibility: temporal_visibility_v1_dev
        closeIdleConnectionsInterval: 15s
Update the dynamic configuration keys on your self-hosted Temporal Cluster to enable write operations to the secondary store and disable read operations. Example:
system.secondaryVisibilityWritingMode:
  - value: "dual"
    constraints: {}
system.enableReadFromSecondaryVisibility:
  - value: false
    constraints: {}
At this point, Visibility data is read from the primary store, and all Visibility data is written to both the primary and secondary store. This setting applies only to new Visibility data generated after Dual Visibility is enabled. It does not migrate any existing data in the primary store to the secondary store.
For details on write options to the secondary store, see the Secondary Visibility dynamic configuration reference.
Run in dual mode
When you enable a secondary store, only new Visibility data is written to both primary and secondary stores. The primary store still holds the Workflow Execution data from before the secondary store was set up.
Running in dual mode lets you plan for the closed and open Workflow Execution data that existed before the secondary store was set up in your self-hosted Temporal Cluster.
Example:
- To manage closed Workflow Executions data, run in dual mode until the Namespace Retention Period is reached. After the Retention Period, Workflow Execution data is removed from the Persistence and Visibility stores. If you want to keep the closed Workflow Executions data after the set Retention Period, you must set up Archival.
- To manage data for all open Workflow Executions, run in dual mode until all the Workflow Executions started before enabling Dual Visibility mode are closed. After the Workflow Executions are closed, verify the Retention Period and set up Archival if you need to keep the data beyond the Retention Period.
You can run your Visibility setup in dual mode for an indefinite period, or until you are ready to deprecate the primary store and move completely to the secondary store without losing data.
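For example, you can check a Namespace's current Retention Period with tctl while deciding how long to run in dual mode; the Namespace name is a placeholder.
tctl --ns yournamespace namespace describe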
Deprecate primary Visibility store
When you are ready to deprecate your primary store, follow these steps.
Update the dynamic configuration YAML to enable read operations from the secondary store. Example:
system.secondaryVisibilityWritingMode:
  - value: "dual"
    constraints: {}
system.enableReadFromSecondaryVisibility:
  - value: true
    constraints: {}
At this point, Visibility data is read from the secondary store only. Verify whether data on the secondary store is correct.
When the secondary store is vetted and ready to replace your current primary store, change your Cluster configuration to set the secondary store as your primary, and remove the dynamic configuration set in the previous steps. Example:
persistence:
  visibilityStore: es-visibility
  datastores:
    es-visibility:
      elasticsearch:
        version: "v7"
        logLevel: "error"
        url:
          scheme: "http"
          host: "127.0.0.1:9200"
        indices:
          visibility: temporal_visibility_v1_dev
        closeIdleConnectionsInterval: 15s
Custom Search Attributes
To manage your custom Search Attributes on Temporal Cloud, use `tcld`. With Temporal Cloud, you can create and rename custom Search Attributes.
To manage your custom Search Attributes on a self-hosted Temporal Cluster, use `tctl`. With a self-hosted Temporal Cluster, you can create and remove custom Search Attributes.
Note that if you use a SQL database with Temporal Server v1.20 and later, creating a custom Search Attribute creates a mapping to a database field name in the Visibility store `custom_search_attributes` table.
Removing a custom Search Attribute removes this mapping with the database field name but does not remove the data.
If you remove a custom Search Attribute and add a new one, the new custom Search Attribute might be mapped to the database field of the one that was recently removed.
This might cause unexpected results when you use the List API to retrieve results using the new custom Search Attribute.
These constraints do not apply if you use Elasticsearch.
Create custom Search Attributes
Add custom Search Attributes to your Visibility store using `tctl` for a self-hosted Temporal Cluster or `tcld` for Temporal Cloud.
Creating a custom Search Attribute in your Visibility store makes it available to use in your Workflow metadata and List Filters.
On Temporal Cloud
To create custom Search Attributes on Temporal Cloud, use `tcld namespace search-attributes add`.
For example, to add a custom Search Attribute "CustomSA" to your Temporal Cloud Namespace "YourNamespace", run the following command.
tcld namespace search-attributes add --namespace YourNamespace --search-attribute "CustomSA"
On self-hosted Temporal Cluster
If you're self-hosting your Temporal Cluster, verify whether your Visibility database version supports advanced Visibility features.
To create custom Search Attributes in your self-hosted Temporal Cluster Visibility store, use `tctl search-attribute create` with the `--name` and `--type` modifiers.
For example, to create a Search Attribute called `CustomSA` of type `Keyword`, run the following command:
tctl search-attribute create --name CustomSA --type Keyword
Note that if you use a SQL database with advanced Visibility capabilities, you are required to specify a Namespace when creating a custom Search Attribute.
For example: tctl --ns yournamespace search-attribute create --name CustomSA --type Keyword
You can also create multiple custom Search Attributes when you set up your Visibility store.
For example, the auto-setup.sh script that is used to set up your local docker-compose Temporal Cluster creates custom Search Attributes in the Visibility store, as shown in the following code snippet from the script (for SQL databases).
add_custom_search_attributes() {
until temporal operator search-attribute list --namespace "${DEFAULT_NAMESPACE}"; do
echo "Waiting for namespace cache to refresh..."
sleep 1
done
echo "Namespace cache refreshed."
echo "Adding Custom*Field search attributes."
temporal operator search-attribute create --namespace "${DEFAULT_NAMESPACE}" --yes \
--name CustomKeywordField --type Keyword \
--name CustomStringField --type Text \
--name CustomTextField --type Text \
--name CustomIntField --type Int \
--name CustomDatetimeField --type Datetime \
--name CustomDoubleField --type Double \
--name CustomBoolField --type Bool
}
Note that this script has been updated for Temporal Server v1.20, which requires associating every custom Search Attribute with a Namespace when using a SQL database.
For Temporal Server v1.19 and earlier, or if using Elasticsearch for advanced Visibility, you can create custom Search Attributes without a Namespace association, as shown in the following example.
add_custom_search_attributes() {
echo "Adding Custom*Field search attributes."
tctl --auto_confirm admin cluster add-search-attributes \
--name CustomKeywordField --type Keyword \
--name CustomStringField --type Text \
--name CustomTextField --type Text \
--name CustomIntField --type Int \
--name CustomDatetimeField --type Datetime \
--name CustomDoubleField --type Double \
--name CustomBoolField --type Bool
}
When your Visibility store is set up and running, these custom Search Attributes are available to use in your Workflow code.
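For example, once Workflow Executions carry one of the custom Search Attributes created above, you can use it in a List Filter; the attribute value here is a placeholder.
tctl --ns yournamespace workflow list --query 'CustomKeywordField = "value"'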
Remove custom Search Attributes
To remove a Search Attribute key from your self-hosted Temporal Cluster Visibility store, use the command `tctl search-attribute remove`.
Removing Search Attributes is not supported on Temporal Cloud.
For example, if you use Elasticsearch for advanced Visibility, to remove a custom Search Attribute called `CustomSA` of type `Keyword`, use the following command:
tctl search-attribute remove --name CustomSA
With Temporal Server v1.20 and later, if you use a SQL database for advanced Visibility, you must specify the Namespace in your command, as shown in the following example:
tctl --ns yournamespace search-attribute remove --name CustomSA
To check whether the Search Attribute was removed, run `tctl search-attribute list` and check the list.
If you're on Temporal Server v1.20 or later, specify the Namespace from which you removed the Search Attribute.
For example, `tctl --ns yournamespace search-attribute list`.
Note that if you use a SQL database with Temporal Server v1.20 and later, a new custom Search Attribute is mapped to a database field name in the Visibility store `custom_search_attributes` table.
Removing this custom Search Attribute removes the mapping with the database field name but does not remove the data.
If you remove a custom Search Attribute and add a new one, the new custom Search Attribute might be mapped to the database field of the one that was recently removed.
This might cause unexpected results when you use the List API to retrieve results using the new custom Search Attribute.
These constraints do not apply if you use Elasticsearch.
Archival
Archival is a feature that automatically backs up Workflow Execution Event Histories and Visibility data from Temporal Cluster persistence to a custom blob store after the Closed Workflow Execution retention period is reached.
Set up Archival
Archival consists of the following elements:
- Configuration: Archival is controlled by the server configuration (that is, the `config/development.yaml` file).
- Provider: Location where the data should be archived. Supported providers are S3, GCloud, and the local file system.
- URI: Specifies which provider should be used. The system uses the URI schema and path to make the determination.
Take the following steps to set up Archival:
- Set up the provider of your choice.
- Configure Archival.
- Create a Namespace that uses a valid URI and has Archival enabled.
Providers
Temporal directly supports several providers:
- Local file system: The filestore archiver is used to archive data in the file system of whatever host the Temporal server is running on. This provider is used mainly for local installations and testing and should not be relied on for production environments.
- Google Cloud: The gcloud archiver is used to connect and archive data with Google Cloud.
- S3: The s3store archiver is used to connect and archive data with S3.
- Custom: If you want to use a provider that is not currently supported, you can create your own Archiver to support it.
Make sure that you save the provider's storage location URI in a place where you can reference it later, because it is passed as a parameter when you create a Namespace.
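For example, a sketch of registering a Namespace with an Archival URI using tctl; the Namespace name and file URI are placeholders, and the --history_archival_uri flag name is an assumption to verify with tctl namespace register --help.
tctl --ns samples-namespace namespace register --gd false \
  --history_archival_state enabled \
  --history_archival_uri "file:///tmp/temporal_archival/development"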
Configuration
Archival configuration is defined in the `config/development.yaml` file.
Let's look at an example configuration:
# Cluster level Archival config
archival:
  # Event History configuration
  history:
    # Archival is enabled at the cluster level
    state: "enabled"
    enableRead: true
    # Namespaces can use either the local filestore provider or the Google Cloud provider
    provider:
      filestore:
        fileMode: "0666"
        dirMode: "0766"
      gstorage:
        credentialsPath: "/tmp/gcloud/keyfile.json"
# Default values for a Namespace if none are provided at creation
namespaceDefaults:
  # Archival defaults
  archival:
    # Event History defaults
    history:
      state: "enabled"
      # New Namespaces will default to the local provider
      URI: "file:///tmp/temporal_archival/development"
You can disable Archival by setting `archival.history.state` and `namespaceDefaults.archival.history.state` to `"disabled"`.
Example:
archival:
  history:
    state: "disabled"
namespaceDefaults:
  archival:
    history:
      state: "disabled"
The following table showcases acceptable values for each configuration and what purpose they serve.
| Config | Acceptable values | Description |
| --- | --- | --- |
| `archival.history.state` | `enabled`, `disabled` | Must be `enabled` to use the Archival feature with any Namespace in the cluster. |
| `archival.history.enableRead` | `true`, `false` | Must be `true` to read from the archived Event History. |
| `archival.history.provider` | Sub provider configs are `filestore`, `gstorage`, `s3`, or `your_custom_provider`. | Default config specifies `filestore`. |
| `archival.history.provider.filestore.fileMode` | File permission string | File permissions of the archived files. We recommend using the default value of `"0666"` to avoid read/write issues. |
| `archival.history.provider.filestore.dirMode` | File permission string | Directory permissions of the archive directory. We recommend using the default value of `"0766"` to avoid read/write issues. |
| `namespaceDefaults.archival.history.state` | `enabled`, `disabled` | Default state of the Archival feature whenever a new Namespace is created without specifying the Archival state. |
| `namespaceDefaults.archival.history.URI` | Valid URI | Must be a URI of the file store location and match a schema that correlates to a provider. |
Additional resources: Temporal Cluster configuration reference.
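As a sketch of a non-default provider, an S3 configuration might look like the following; the `s3store` key and `region` field are assumptions based on the supported providers listed above, and the bucket URI is a placeholder.
archival:
  history:
    state: "enabled"
    enableRead: true
    provider:
      s3store:
        region: "us-east-1" # assumption: region of your archival bucket
namespaceDefaults:
  archival:
    history:
      state: "enabled"
      URI: "s3://your-archival-bucket" # placeholder bucket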
Namespace creation
Although Archival is configured at the cluster level, it operates independently within each Namespace.
If an Archival URI is not specified when a Namespace is created, the Namespace uses the value of `namespaceDefaults.archival.history.URI` from the `config/development.yaml` file.
The Archival URI cannot be changed after the Namespace is created.
Each Namespace supports only a single Archival URI, but each Namespace can use a different URI.
A Namespace can safely switch Archival between `enabled` and `disabled` states as long as Archival is enabled at the cluster level.
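For example, a sketch of switching Archival off for an existing Namespace with tctl; the flag name assumes the same option used by namespace register, so verify it with tctl namespace update --help.
tctl --ns samples-namespace namespace update --history_archival_state disabled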
Archival is supported in Global Namespaces (Namespaces that span multiple clusters).
When Archival is running in a Global Namespace, it first runs on the active cluster; later it runs on the standby cluster. Before archiving, a history check is done to see what has been previously archived.
Test setup
To test Archival locally, start by running a Temporal server:
./temporal-server start
Then register a new Namespace with Archival enabled.
./tctl --ns samples-namespace namespace register --gd false --history_archival_state enabled --retention 3
If the retention period isn't set, it defaults to two days. The minimum retention period is one day. The maximum retention period is 30 days.
Setting the retention period to 0 results in the error A valid retention period is not set on request.
Next, run a sample Workflow such as the helloworld temporal sample.
When execution is finished, Archival occurs.
Retrieve archives
You can retrieve archived Event Histories by copying the `workflowId` and `runId` of the completed Workflow from the log output and running the following command:
./tctl --ns samples-namespace wf show --wid <workflowId> --rid <runId>
Custom Archiver
To archive data with a given provider using the Archival feature, Temporal must have a corresponding Archiver component installed.
The platform does not limit you to the existing providers.
To use a provider that is not currently supported, you can create your own Archiver.
Create a new package
The first step is to create a new package for your implementation in /common/archiver. Create a directory in the archiver folder and arrange the structure to look like the following:
temporal/common/archiver
- filestore/ -- Filestore implementation
- provider/
- provider.go -- Provider of archiver instances
- yourImplementation/
- historyArchiver.go -- HistoryArchiver implementation
- historyArchiver_test.go -- Unit tests for HistoryArchiver
- visibilityArchiver.go -- VisibilityArchiver implementations
- visibilityArchiver_test.go -- Unit tests for VisibilityArchiver
Archiver interfaces
Next, define objects that implement the HistoryArchiver and the VisibilityArchiver interfaces.
The objects should live in `historyArchiver.go` and `visibilityArchiver.go`, respectively.
Update provider
Update the `GetHistoryArchiver` and `GetVisibilityArchiver` methods of the `archiverProvider` object in the /common/archiver/provider/provider.go file so that it knows how to create an instance of your archiver.
Add configs
Add configs for your archiver to the `config/development.yaml` file and then modify the HistoryArchiverProvider and VisibilityArchiverProvider structs in /common/common/config.go accordingly.
Custom archiver FAQ
If my custom Archive method can automatically be retried by the caller, how can I record and access progress between retries?
Handle this situation by using `ArchiverOptions`.
Here is an example:
func (a *Archiver) Archive(ctx context.Context, URI string, request *ArchiveRequest, opts ...ArchiveOption) error {
    featureCatalog := GetFeatureCatalog(opts...) // this function is defined in options.go
    var progress progress
    // Check if the feature for recording progress is enabled.
    if featureCatalog.ProgressManager != nil {
        if err := featureCatalog.ProgressManager.LoadProgress(ctx, &progress); err != nil {
            // log some error message and return error if needed.
        }
    }
    // Your archiver implementation...
    // Record current progress
    if featureCatalog.ProgressManager != nil {
        if err := featureCatalog.ProgressManager.RecordProgress(ctx, progress); err != nil {
            // log some error message and return error if needed.
        }
    }
    return nil
}
If my `Archive` method encounters an error that is non-retryable, how do I indicate to the caller that it should not retry?
func (a *Archiver) Archive(ctx context.Context, URI string, request *ArchiveRequest, opts ...ArchiveOption) error {
    featureCatalog := GetFeatureCatalog(opts...) // this function is defined in options.go
    err := yourArchiverImpl()
    if nonRetryableErr(err) {
        if featureCatalog.NonRetryableError != nil {
            return featureCatalog.NonRetryableError() // when the caller gets this error type back it will not retry anymore.
        }
    }
    return err
}
How does my history archiver implementation read history?
The archiver package provides a utility called `HistoryIterator`, which is a wrapper of `ExecutionManager`.
`HistoryIterator` is simpler than `HistoryManager`, which is available in the BootstrapContainer, so archiver implementations can choose to use it when reading Workflow histories.
See the historyIterator.go file for more details.
Use the filestore historyArchiver implementation as an example.
Should my archiver define its own error types?
Each archiver is free to define and return its own errors. However, many common errors that exist between archivers are already defined in common/archiver/constants.go.
Is there a generic query syntax for the visibility archiver?
Currently, no. But this is something we plan to do in the future. As for now, try to make your syntax similar to the one used by our advanced list Workflow API.
Upgrade Server
If a newer version of the Temporal Server is available, a notification appears in the Temporal Web UI.
If you are using a version that is older than 1.0.0, reach out to us at community.temporal.io to ask how to upgrade.
First, check whether the version you want to upgrade to requires a database schema upgrade. If a database schema upgrade is required, it is called out directly in the release notes. Some releases require changes to the schema, and some do not. We ensure that any consecutive versions are compatible in terms of database schema upgrades, features, and system behavior; however, there is no guarantee of compatibility between any two non-consecutive versions.
When upgrading your Temporal Server version, ensure that you upgrade sequentially. For example, when upgrading from v1.n.x, always upgrade to v1.n+1.x (or the next available version) and so on until you get to the required version.
The Temporal Server upgrade updates or rewrites the old version data with the format introduced in the newer version. Because Temporal Server guarantees backward compatibility between two consecutive minor versions, and because older versions of the code are eventually removed from the code base, skipping versions when upgrading might cause older formats to become unrecognizable. If the old format of the data can't be read to be rewritten to the new format, the upgrades fail.
Check the Temporal Server releases and follow these releases in order. You can skip patch versions; use the latest patch of a minor version when upgrading.
Also be aware that each upgrade requires the History Service to load all Shards and update the Shard metadata, so allow approximately 10 minutes on each version for these processes to complete before upgrading to the next version.
Use one of the upgrade tools to upgrade your database schema to be compatible with the Temporal Server version being upgraded to.
If you are using a schema tools version prior to Temporal Server v1.8.0, we strongly recommend never using the "dryrun" (`-y` or `--dryrun`) option in any of your schema update commands. Using this option might lead to potential loss of data, because it creates a new database and drops your existing one.
This flag was removed in the 1.8.0 release.
Upgrade Cassandra schema
If you are using Cassandra for your Cluster's persistence, use the `temporal-cassandra-tool` to upgrade both the default Persistence and Visibility schemas.
Example default schema upgrade:
temporal_v1.2.1 $ temporal-cassandra-tool \
--tls \
--tls-ca-file <...> \
--user <cassandra-user> \
--password <cassandra-password> \
--endpoint <cassandra.example.com> \
--keyspace temporal \
--timeout 120 \
update \
--schema-dir ./schema/cassandra/temporal/versioned
Example visibility schema upgrade:
temporal_v1.2.1 $ temporal-cassandra-tool \
--tls \
--tls-ca-file <...> \
--user <cassandra-user> \
--password <cassandra-password> \
--endpoint <cassandra.example.com> \
--keyspace temporal_visibility \
--timeout 120 \
update \
--schema-dir ./schema/cassandra/visibility/versioned
Upgrade PostgreSQL or MySQL schema
If you are using MySQL or PostgreSQL, use the `temporal-sql-tool`, which works similarly to the `temporal-cassandra-tool`.
Refer to this Makefile for context.
PostgreSQL
Example default schema upgrade:
./temporal-sql-tool \
--tls \
--tls-enable-host-verification \
--tls-cert-file <path to your client cert> \
--tls-key-file <path to your client key> \
--tls-ca-file <path to your CA> \
--ep localhost -p 5432 -u temporal -pw temporal --pl postgres --db temporal update-schema -d ./schema/postgresql/v96/temporal/versioned
Example visibility schema upgrade:
./temporal-sql-tool \
--tls \
--tls-enable-host-verification \
--tls-cert-file <path to your client cert> \
--tls-key-file <path to your client key> \
--tls-ca-file <path to your CA> \
--ep localhost -p 5432 -u temporal -pw temporal --pl postgres --db temporal_visibility update-schema -d ./schema/postgresql/v96/visibility/versioned
If you're upgrading PostgreSQL to v12 or later to enable advanced Visibility features with Temporal Server v1.20, upgrade your PostgreSQL version first, and then run `temporal-sql-tool` with the `postgres12` plugin, as shown in the following example:
./temporal-sql-tool \
--tls \
--tls-enable-host-verification \
--tls-cert-file <path to your client cert> \
--tls-key-file <path to your client key> \
--tls-ca-file <path to your CA> \
--ep localhost -p 5432 -u temporal -pw temporal --pl postgres12 --db temporal_visibility update-schema -d ./schema/postgresql/v12/visibility/versioned
MySQL
Example default schema upgrade:
./temporal-sql-tool \
--tls \
--tls-enable-host-verification \
--tls-cert-file <path to your client cert> \
--tls-key-file <path to your client key> \
--tls-ca-file <path to your CA> \
--ep localhost -p 3306 -u root -pw root --pl mysql --db temporal update-schema -d ./schema/mysql/v57/temporal/versioned/
Example visibility schema upgrade:
./temporal-sql-tool \
--tls \
--tls-enable-host-verification \
--tls-cert-file <path to your client cert> \
--tls-key-file <path to your client key> \
--tls-ca-file <path to your CA> \
--ep localhost -p 3306 -u root -pw root --pl mysql --db temporal_visibility update-schema -d ./schema/mysql/v57/visibility/versioned/
If you're upgrading MySQL to v8.0.17 or later to enable advanced Visibility features with Temporal Server v1.20, upgrade your MySQL version first, and then run `temporal-sql-tool` with the `mysql8` plugin, as shown in the following example:
./temporal-sql-tool \
--tls \
--tls-enable-host-verification \
--tls-cert-file <path to your client cert> \
--tls-key-file <path to your client key> \
--tls-ca-file <path to your CA> \
--ep localhost -p 3306 -u temporal -pw temporal --pl mysql8 --db temporal_visibility update-schema -d ./schema/mysql/v8/visibility/versioned
Roll-out technique
We recommend preparing a staging Cluster and then doing the following to verify that the upgrade is successful:
- Create some simulation load on the staging cluster.
- Upgrade the database schema in the staging cluster.
- Wait and observe for a few minutes to verify that there is no unstable behavior from both the server and the simulation load logic.
- Upgrade the server.
- Now do the same to the live environment cluster.
Health checks
The Frontend Service supports TCP or gRPC health checks on port 7233.
If you use Nomad to manage your containers, the check stanza would look like this for TCP:
service {
  check {
    type     = "tcp"
    port     = 7233
    interval = "10s"
    timeout  = "2s"
  }
}
or like this for gRPC (requires Consul ≥ `1.0.5`):
service {
  check {
    type     = "grpc"
    port     = 7233
    interval = "10s"
    timeout  = "2s"
  }
}
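If you don't use Nomad, you can probe the gRPC health endpoint directly; for example, with the grpc-health-probe tool (an assumption about your tooling), using the Frontend's health check service name.
grpc-health-probe -addr=localhost:7233 -service=temporal.api.workflowservice.v1.WorkflowService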
Set up Multi-Cluster Replication
The Multi-Cluster Replication feature asynchronously replicates Workflow Execution Event Histories from active Clusters to other passive Clusters, and can be enabled by setting the appropriate values in the `clusterMetadata` section of your configuration file.
- `enableGlobalNamespace` must be set to `true`.
- `failoverVersionIncrement` must be equal across connected Clusters.
- `initialFailoverVersion` must be assigned a different value in each Cluster. No equal value is allowed across connected Clusters.
After the above conditions are satisfied, you can start to configure a multi-cluster setup.
Set up Multi-Cluster Replication prior to v1.14
You can set this up with the `clusterMetadata` configuration; however, this is meant to be only a conceptual guide rather than a detailed tutorial.
Please reach out to us if you need to set this up.
For example:
# cluster A
clusterMetadata:
  enableGlobalNamespace: false
  failoverVersionIncrement: 100
  masterClusterName: "clusterA"
  currentClusterName: "clusterA"
  clusterInformation:
    clusterA:
      enabled: true
      initialFailoverVersion: 1
      rpcAddress: "127.0.0.1:7233"
    clusterB:
      enabled: true
      initialFailoverVersion: 2
      rpcAddress: "127.0.0.1:8233"
# cluster B
clusterMetadata:
  enableGlobalNamespace: false
  failoverVersionIncrement: 100
  masterClusterName: "clusterA"
  currentClusterName: "clusterB"
  clusterInformation:
    clusterA:
      enabled: true
      initialFailoverVersion: 1
      rpcAddress: "127.0.0.1:7233"
    clusterB:
      enabled: true
      initialFailoverVersion: 2
      rpcAddress: "127.0.0.1:8233"
Set up Multi-Cluster Replication in v1.14 and later
You still need to set up the local cluster's `clusterMetadata` configuration.
For example:
# cluster A
clusterMetadata:
  enableGlobalNamespace: false
  failoverVersionIncrement: 100
  masterClusterName: "clusterA"
  currentClusterName: "clusterA"
  clusterInformation:
    clusterA:
      enabled: true
      initialFailoverVersion: 1
      rpcAddress: "127.0.0.1:7233"
# cluster B
clusterMetadata:
  enableGlobalNamespace: false
  failoverVersionIncrement: 100
  masterClusterName: "clusterB"
  currentClusterName: "clusterB"
  clusterInformation:
    clusterB:
      enabled: true
      initialFailoverVersion: 2
      rpcAddress: "127.0.0.1:8233"
Then you can use the `tctl admin` tool to add cluster connections. All operations should be executed in both Clusters.
# Add cluster B connection into cluster A
tctl -address 127.0.0.1:7233 admin cluster upsert-remote-cluster --frontend_address "localhost:8233"
# Add cluster A connection into cluster B
tctl -address 127.0.0.1:8233 admin cluster upsert-remote-cluster --frontend_address "localhost:7233"
# Disable connections
tctl -address 127.0.0.1:7233 admin cluster upsert-remote-cluster --frontend_address "localhost:8233" --enable_connection false
tctl -address 127.0.0.1:8233 admin cluster upsert-remote-cluster --frontend_address "localhost:7233" --enable_connection false
# Delete connections
tctl -address 127.0.0.1:7233 admin cluster remove-remote-cluster --cluster "clusterB"
tctl -address 127.0.0.1:8233 admin cluster remove-remote-cluster --cluster "clusterA"