
RBKB-2015-0003 How to switch from a single-tenant scenario to a multi-tenant scenario






 

The purpose of this document is to serve as a guide for switching from a single-tenant scenario, where each client has its own cluster, to a multi-tenant cluster.

 

1 - Configuring AWS Remote

To configure every client or cluster for backup, the keys and the buckets have to be created in the AWS Panel.

 

There are a few steps to follow in the AWS Panel:

1. Create buckets to store the data in AWS remote:

In AWS Panel, go to the section Storage & Content Delivery and click on the S3 option:

Now you are viewing the buckets you have in your AWS Service. To create a new bucket, press the Create Bucket button:

Name the bucket, select the region where you want to store it, and press the Create button:

Remember that it is necessary to create a bucket for each client or cluster and another one for the multi-tenant cluster. In order to use standard names in this guide, we are going to name the bucket for each client:

bucket-cluster-i

where “i” is an incremental number ranging from 1 to the total number of clients.

Remember that the multi-tenant cluster also needs a bucket created in the AWS S3 remote. So, we will call it:

bucket-multitenant-cluster
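If you prefer the command line over the AWS Panel, the buckets can also be created with the AWS CLI. The following is a minimal sketch, assuming the AWS CLI is installed and configured with credentials allowed to create buckets; the region eu-west-1 and the client count are placeholders to adjust:

# Create one bucket per client cluster (adjust the range to your number of clients)
for i in 1 2 3; do
  aws s3 mb "s3://bucket-cluster-$i" --region eu-west-1
done
# And the bucket for the multi-tenant cluster
aws s3 mb s3://bucket-multitenant-cluster --region eu-west-1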

2. Create a user to access the buckets:

In AWS Panel, go to the section Security & Identity and click on the Identity & Access Management option:

Go to Users and press the Create New Users button:


Enter the username you want and press the Create button:

Now you must write down the Access Key ID and the Secret Access Key:
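If you prefer to script this step too, the user and its keys can be created with the AWS CLI. A sketch, assuming the CLI is configured with an account allowed to manage IAM; the username rb-backup-user is a placeholder:

# Create the user
aws iam create-user --user-name rb-backup-user
# The output of this command contains the AccessKeyId and SecretAccessKey to write down
aws iam create-access-key --user-name rb-backup-user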

3. Give bucket access and control permissions to the user:

Go to Users and select the user you created before. Then, go to Permissions and press the Attach Policy button:

Select Policy Generator and press the Select button:

Now we need to grant access and modification permissions on each bucket we created before. To do that, two ARN statements must be created for each bucket we want to grant permissions on. For example, if we have a bucket named rb-bucket-example, the ARN statements we need to create are:

arn:aws:s3:::rb-bucket-example

arn:aws:s3:::rb-bucket-example/*

Once the statements have been added, press the Next Step button:

And apply the policy:
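For reference, the policy generated this way should look roughly like the JSON document below. This is a sketch: it assumes full S3 access (s3:*) was selected in the Policy Generator, and the two Resource entries must be repeated for every bucket you created:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::rb-bucket-example",
        "arn:aws:s3:::rb-bucket-example/*"
      ]
    }
  ]
}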

2 - Configuring clusters for backup

Backup segments (Remote S3) must be configured in every cluster.

In each cluster (single-tenant or multi-tenant cluster) we need to go to Tools -> General Settings and click on “Backup segments (Remote S3)” to reveal the options.


Then we need to fill out the appropriate parameters:

  • Access key, secret key and hostname are the same in all clusters.
  • The Bucket field depends on the cluster being configured because you are linking the cluster itself with the AWS S3 bucket you created before.
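For example, on Client 1's single-tenant cluster the fields would be filled in with the keys written down earlier and that cluster's own bucket. A sketch (the key values are placeholders and the hostname is an assumption based on the standard AWS S3 endpoint):

Access key: AKIAXXXXXXXXXXXXXXXX
Secret key: the Secret Access Key written down earlier
Hostname:   s3.amazonaws.com
Bucket:     bucket-cluster-1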

 

3 - Getting namespace information

It’s necessary to know where in the multi-tenant cluster we are going to store the data of the single-tenant clusters.

 

For each single-tenant cluster, a namespace must be created in the multi-tenant cluster. The only information that is mandatory for the next steps is the uuid of every namespace created. To obtain the uuid of a namespace, go to Sensors and edit the desired namespace. Then write down the uuid:



4 - Making the Backup

The steps must be carried out in the correct order to get the multi-tenant cluster running with all the data obtained from the single-tenant clusters.

The steps are as follows:

1. Execute in each client cluster: First of all, it's a good idea (but not mandatory) to find out which node of the cluster has the lowest load, since it is best to execute the following command on that node:

[root@node2_Client1 ~]# rb_backup_segments.sh -s
The backup will be created on s3://bucket-cluster-1
Free space: 31% (20G)  load average: 0.03, 0.24, 0.17
Would you like to continue? (Y/n) y
Backup druid database db-druid-dump.psql                   [  OK  ]
Uploading db-druid-dump.psql.201512171308:                 [  OK  ]
Getting local S3 md5 files info:                           [  OK  ]
Getting remote S3 md5 files info:                          [  OK  ]
Copying rbdata/rb_monitor/2015-12-17T11:00:00.000Z_2015-12-17T12:00:00.000Z/2015-12-17T11:00:00.000Z/0/index.zip: OK
Copying rbdata/rb_monitor/2015-12-17T11:00:00.000Z_2015-12-17T12:00:00.000Z/2015-12-17T11:00:00.000Z/0/descriptor.json: OK
Copying rbdata/rb_monitor/2015-12-17T11:00:00.000Z_2015-12-17T12:00:00.000Z/2015-12-17T11:00:00.000Z/0/rule.json: OK
Copying rbdata/rb_monitor/2015-12-17T10:00:00.000Z_2015-12-17T11:00:00.000Z/2015-12-17T10:00:00.000Z/0/index.zip: OK
Copying rbdata/rb_monitor/2015-12-17T10:00:00.000Z_2015-12-17T11:00:00.000Z/2015-12-17T10:00:00.000Z/0/descriptor.json: OK
Copying rbdata/rb_monitor/2015-12-17T10:00:00.000Z_2015-12-17T11:00:00.000Z/2015-12-17T10:00:00.000Z/0/rule.json: OK
Deleting temporal data /tmp/segment.tmp-201512171308.11516 [  OK  ]

If you need to verify the data stored in the AWS S3 remote:


[root@node2_Client1 ~]# s3cmd ls -c .s3cfg-backup s3://bucket-cluster-1
                       DIR   s3://bucket-cluster-1/segments/

Remember that this command needs to be executed in every single-tenant cluster before continuing to the next step.

Note: In the multi-tenant cluster it is not necessary to perform this step because it does not yet have data.
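If you manage many client clusters, this first step can also be launched from a single administration host. A hypothetical convenience sketch, assuming passwordless SSH to one low-load node of each cluster and that the script reads its confirmation prompt from stdin; the hostnames are placeholders:

# Run the backup on one node of each single-tenant cluster
for host in node2_Client1 node2_Client2; do
  echo y | ssh root@"$host" rb_backup_segments.sh -s
done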

2. Execute in the multi-tenant cluster for every client: For each client or cluster, the following command must be executed on the node with the lowest average load:

rb_backup_segments.sh -s -r -k bucket-cluster-i -p UUID_of_the_namespace_for_Client_i

For example, to recover the data of Client1 and assign it to the appropriate namespace in the multi-tenant cluster, execute the following command on the node with the lowest average load in the multi-tenant cluster:

[root@node3_MTCluster ~]# rb_backup_segments.sh -s -r -p 5385682990828238621 -k bucket-cluster-1
WARNING: Restoring backup from s3://bucket-cluster-1
Would you like to continue? (y/N) y
Getting local S3 md5 files info:                           [  OK  ]
Getting remote S3 md5 files info:                          [  OK  ]
Getting s3://bucket-cluster-1/segments/last/rbdata/rb_monitor/2015-12-17T11:00:00.000Z_2015-12-17T12:00:00.000Z/2015-12-17T11:00:00.000Z/0/index.zip: OK
Copying s3://redborder/rbdata/rb_monitor_5385682990828238621/2015-12-17T11:00:00.000Z_2015-12-17T12:00:00.000Z/2015-12-17T11:00:00.000Z/0/index.zip: OK
Getting s3://bucket-cluster-1/segments/last/rbdata/rb_monitor/2015-12-17T11:00:00.000Z_2015-12-17T12:00:00.000Z/2015-12-17T11:00:00.000Z/0/descriptor.json: OK
Copying s3://redborder/rbdata/rb_monitor_5385682990828238621/2015-12-17T11:00:00.000Z_2015-12-17T12:00:00.000Z/2015-12-17T11:00:00.000Z/0/descriptor.json: OK
Getting s3://bucket-cluster-1/segments/last/rbdata/rb_monitor/2015-12-17T11:00:00.000Z_2015-12-17T12:00:00.000Z/2015-12-17T11:00:00.000Z/0/rule.json: OK
Inserting rule for rbdata/rb_monitor_5385682990828238621/2015-12-17T11:00:00.000Z_2015-12-17T12:00:00.000Z/2015-12-17T11:00:00.000Z/0/rule.json: OK
Getting s3://bucket-cluster-1/segments/last/rbdata/rb_monitor/2015-12-17T10:00:00.000Z_2015-12-17T11:00:00.000Z/2015-12-17T10:00:00.000Z/0/index.zip: OK
Copying s3://redborder/rbdata/rb_monitor_5385682990828238621/2015-12-17T10:00:00.000Z_2015-12-17T11:00:00.000Z/2015-12-17T10:00:00.000Z/0/index.zip: OK
Getting s3://bucket-cluster-1/segments/last/rbdata/rb_monitor/2015-12-17T10:00:00.000Z_2015-12-17T11:00:00.000Z/2015-12-17T10:00:00.000Z/0/descriptor.json: OK
Copying s3://redborder/rbdata/rb_monitor_5385682990828238621/2015-12-17T10:00:00.000Z_2015-12-17T11:00:00.000Z/2015-12-17T10:00:00.000Z/0/descriptor.json: OK
Getting s3://bucket-cluster-1/segments/last/rbdata/rb_monitor/2015-12-17T10:00:00.000Z_2015-12-17T11:00:00.000Z/2015-12-17T10:00:00.000Z/0/rule.json: OK
Inserting rule for rbdata/rb_monitor_5385682990828238621/2015-12-17T10:00:00.000Z_2015-12-17T11:00:00.000Z/2015-12-17T10:00:00.000Z/0/rule.json: OK
Getting s3://bucket-cluster-1/segments/last/rbdata/rb_monitor/2015-12-17T09:00:00.000Z_2015-12-17T10:00:00.000Z/2015-12-17T09:00:00.000Z/0/index.zip: OK
Copying s3://redborder/rbdata/rb_monitor_5385682990828238621/2015-12-17T09:00:00.000Z_2015-12-17T10:00:00.000Z/2015-12-17T09:00:00.000Z/0/index.zip: OK
Getting s3://bucket-cluster-1/segments/last/rbdata/rb_monitor/2015-12-17T09:00:00.000Z_2015-12-17T10:00:00.000Z/2015-12-17T09:00:00.000Z/0/descriptor.json: OK
Copying s3://redborder/rbdata/rb_monitor_5385682990828238621/2015-12-17T09:00:00.000Z_2015-12-17T10:00:00.000Z/2015-12-17T09:00:00.000Z/0/descriptor.json: OK
Getting s3://bucket-cluster-1/segments/last/rbdata/rb_monitor/2015-12-17T09:00:00.000Z_2015-12-17T10:00:00.000Z/2015-12-17T09:00:00.000Z/0/rule.json: OK
Inserting rule for rbdata/rb_monitor_5385682990828238621/2015-12-17T09:00:00.000Z_2015-12-17T10:00:00.000Z/2015-12-17T09:00:00.000Z/0/rule.json: OK

3. Execute in the multi-tenant cluster once: At this point, all the data should be in the multi-tenant cluster. So, the last step is to back up this data to the bucket assigned to the multi-tenant cluster in the AWS S3 remote. To do this, execute the following command on the node with the lowest average load in the multi-tenant cluster:

[root@node3_MTCluster ~]# rb_backup_segments.sh -s
The backup will be created on s3://bucket-multitenant-cluster
Free space: 95% (200G)  load average: 0.08, 0.16, 0.14
Would you like to continue? (Y/n) y
Backup druid database db-druid-dump.psql                   [  OK  ]
Uploading db-druid-dump.psql.201512171308:                 [  OK  ]
Getting local S3 md5 files info:                           [  OK  ]
Getting remote S3 md5 files info:                          [  OK  ]
Copying rbdata/rb_monitor_5385682990828238621/2015-12-17T11:00:00.000Z_2015-12-17T12:00:00.000Z/2015-12-17T11:00:00.000Z/0/index.zip: OK
Copying rbdata/rb_monitor_5385682990828238621/2015-12-17T11:00:00.000Z_2015-12-17T12:00:00.000Z/2015-12-17T11:00:00.000Z/0/descriptor.json: OK
Copying rbdata/rb_monitor_5385682990828238621/2015-12-17T11:00:00.000Z_2015-12-17T12:00:00.000Z/2015-12-17T11:00:00.000Z/0/rule.json: OK
Copying rbdata/rb_monitor_5385682990828238621/2015-12-17T10:00:00.000Z_2015-12-17T11:00:00.000Z/2015-12-17T10:00:00.000Z/0/index.zip: OK
Copying rbdata/rb_monitor_5385682990828238621/2015-12-17T10:00:00.000Z_2015-12-17T11:00:00.000Z/2015-12-17T10:00:00.000Z/0/descriptor.json: OK
Copying rbdata/rb_monitor_5385682990828238621/2015-12-17T10:00:00.000Z_2015-12-17T11:00:00.000Z/2015-12-17T10:00:00.000Z/0/rule.json: OK
Deleting temporal data /tmp/segment.tmp-201512171308.11516[  OK  ]

5 - Customizing the Backup and Troubleshooting

There are multiple options for customizing the data you move from a node to AWS S3 and vice versa. Common problems can be resolved easily.

Specifying the dates of the data backup/restore:

The script rb_backup_segments.sh offers the possibility of passing a parameter that works like grep. So, if you want to restrict the backup to a specific year (e.g. 2015), you can execute:

rb_backup_segments.sh -s -g /2015

If you want to specify a month (e.g. December):

rb_backup_segments.sh -s -g /2015-12

It also works if you are trying to restore:

rb_backup_segments.sh -s -r -p 5385682990828238621 -k bucket-cluster-1 -g /2015-12
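Since the parameter behaves like grep over the segment paths shown in the outputs above, finer-grained patterns should also work. For example, to restrict the operation to a single day (an assumption based on the path format rbdata/rb_monitor/2015-12-17T...):

rb_backup_segments.sh -s -g /2015-12-17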

 

What is going wrong with AWS or S3?

There are multiple possible causes of failure. For example, if you want to list all buckets and objects created in the local S3:

s3cmd la

and for the AWS S3 remote:

s3cmd la -c .s3cfg-backup

 

Looking for data uploaded to AWS S3:

This can be done by executing (for a bucket named bucket-cluster-1):

s3cmd ls -c .s3cfg-backup s3://bucket-cluster-1/segments/last/rbdata/
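If you also want to check how much data a bucket holds, s3cmd provides a du subcommand that can be run against the same backup configuration:

s3cmd du -c .s3cfg-backup s3://bucket-cluster-1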