AWS Deployment using S3
Note
- If you choose backup_config as s3 in config.toml, backup is already configured during deployment and the steps below are not required. If backup_config was left blank, the backup must be configured manually using the steps below.
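For reference, a sketch of that deployment-time setting. The architecture.aws section and the s3_bucketName key reflect a typical AWS deployment config.toml and are assumptions here; they may differ by Automate version:

[architecture.aws]
# Setting backup_config at deploy time configures S3 backups automatically.
backup_config = "s3"
s3_bucketName = "bucket-name"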
Overview
To communicate with Amazon S3, you need an IAM role with the required policy.
Attach the IAM role to all the OpenSearch nodes and frontend nodes.
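For illustration, a minimal sketch of such a policy and its attachment using the AWS CLI. The role name automate-backup-role, the instance profile automate-backup-profile, and the instance ID are hypothetical placeholders, and the role with its instance profile is assumed to already exist; your environment may require additional permissions.

# Placeholder names throughout; bucket-name is the backup bucket used below.
cat > s3-backup-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::bucket-name"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::bucket-name/*"
    }
  ]
}
EOF
# Attach the policy inline to the (pre-existing) role.
aws iam put-role-policy --role-name automate-backup-role \
  --policy-name s3-backup-policy --policy-document file://s3-backup-policy.json
# Associate the role's instance profile with each OpenSearch and frontend instance.
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=automate-backup-profile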
Configuration in Provision Host
- Create a TOML file, say automate.toml. Refer to the content for the automate.toml file below:
[global.v1]
[global.v1.external.opensearch.backup]
enable = true
location = "s3"
[global.v1.external.opensearch.backup.s3]
# bucket (required): The name of the bucket
bucket = "bucket-name"
# base_path (optional): The path within the bucket where backups should be stored
# If base_path is not set, backups will be stored at the root of the bucket.
base_path = "opensearch"
# name of an s3 client configuration you create in your opensearch.yml
# see https://www.open.co/guide/en/opensearch/plugins/current/repository-s3-client.html
# for full documentation on how to configure client settings on your
# OpenSearch nodes
client = "default"
[global.v1.external.opensearch.backup.s3.settings]
## The meaning of these settings is documented in the S3 Repository Plugin
## documentation. See the following links:
## https://www.open.co/guide/en/opensearch/plugins/current/repository-s3-repository.html
## Backup repo settings
# compress = false
# server_side_encryption = false
# buffer_size = "100mb"
# canned_acl = "private"
# storage_class = "standard"
## Snapshot settings
# max_snapshot_bytes_per_sec = "40mb"
# max_restore_bytes_per_sec = "40mb"
# chunk_size = "null"
## S3 client settings
# read_timeout = "50s"
# max_retries = 3
# use_throttle_retries = true
# protocol = "https"
[global.v1.backups]
location = "s3"
[global.v1.backups.s3.bucket]
# name (required): The name of the bucket
name = "bucket-name"
# endpoint (required): The endpoint for the region the bucket lives in for Automate Version 3.x.y
# endpoint (required): For Automate Version 4.x.y, use this https://s3.amazonaws.com
endpoint = "https://s3.amazonaws.com"
# base_path (optional): The path within the bucket where backups should be stored
# If base_path is not set, backups will be stored at the root of the bucket.
base_path = "automate"
[global.v1.backups.s3.credentials]
access_key = "<Your Access Key>"
secret_key = "<Your Seecret Key>"
Execute the command given below to apply the configuration:
./chef-automate config patch automate.toml
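Optionally, you can verify that the patched values are now part of the running configuration:

chef-automate config show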
Backup and Restore Commands
Backup
To create a backup, run the backup command from a Chef Automate front-end node. The backup command is shown below:
chef-automate backup create
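Each backup is identified by a timestamp-style backup ID, which the restore command requires later. You can list the available backups and their IDs at any time:

chef-automate backup list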
Restoring the Backed-up Data from Object Storage
To restore backed-up data of Chef Automate High Availability (HA) using external AWS S3, follow the steps given below:
- Check the status of all Chef Automate and Chef Infra Server front-end nodes by executing the chef-automate status command.
- Shut down the Chef Automate service on all front-end nodes:
  - Execute the sudo systemctl stop chef-automate command on all Chef Automate nodes.
  - Execute the sudo systemctl stop chef-automate command on all Chef Infra Server nodes.
- Log in to the same Chef Automate front-end node from which the backup was taken.
- Execute the restore command (a worked example follows this list):
  chef-automate backup restore s3://bucket_name/path/to/backups/BACKUP_ID --skip-preflight --s3-access-key "Access_Key" --s3-secret-key "Secret_Key"
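For instance, assuming the bucket-name bucket and the automate base_path configured in automate.toml above, and a hypothetical backup ID of 20240102120000, the command would look like:

chef-automate backup restore s3://bucket-name/automate/20240102120000 --skip-preflight --s3-access-key "Access_Key" --s3-secret-key "Secret_Key"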
Note
After the restore command executes successfully, start the services on the other front-end nodes. Use the command below to start all the services:
sudo systemctl start chef-automate
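Once the services are started, re-run the status check on each front-end node to confirm that everything came back up healthy:

chef-automate status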