So far, we have completed eight monthly reports in this SQL Server backup and restore series: the three common types of database backups, designing a backup strategy, finding the backup chain, the relationship between the three database recovery models and backups, isolating cold and hot data backups with filegroups, how to monitor backup and restore progress, a solution for automated migration to Alibaba Cloud RDS SQL Server, and last month's share on using the RDS SDK to migrate databases to Alibaba Cloud. In this installment, we show how to migrate all databases of a user's offline or ECS self-built SQL Server instance to Alibaba Cloud RDS SQL Server in one step, at the instance level.
In last month's share on using the RDS SDK to migrate databases to Alibaba Cloud RDS SQL Server, we showed how to automatically migrate a single user database from an offline or ECS self-built SQL Server instance to RDS SQL Server. In other words, it is a database-level migration solution: each run moves one of the user's offline databases to the cloud.
However, some users face a different scenario: they may have tens or hundreds of SQL Server instances offline, each hosting tens or hundreds of databases, which adds up to thousands of databases to migrate. A database-level approach is clearly inadequate and inefficient at that scale. To support bulk database migration to RDS for SQL Server, simplify the migration steps, and improve efficiency, instance-level migration to RDS SQL Server is the scenario we urgently need to address.
Since last month's RDS SDK article already implemented migration of a single database to Alibaba Cloud RDS SQL Server, we can build instance-level migration on top of it with the following approach (a short sketch follows the list):
Upload the full backup files of all databases on the user's offline instance to a single folder in an OSS bucket
Iterate over all database backup files in that OSS folder
Create one migration task per backup file
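Here is a minimal sketch of that flow, using the same oss2 and aliyun-python-sdk-rds calls as the complete script later in this article. The credentials, instance ID, bucket, and folder below are placeholders, and the full script additionally checks regions, engine versions, and file names:

# Minimal sketch (placeholders): list every full backup file under an OSS folder
# and submit one CreateMigrateTask request per file.
import json
import oss2
from aliyunsdkcore.client import AcsClient
from aliyunsdkrds.request.v20140815 import CreateMigrateTaskRequest

access_key_id, access_key_secret = '<access_key_id>', '<access_key_secret>'
rds_instance_id = '<rds_instance_id>'
oss_endpoint, oss_bucket, directory = '<oss_endpoint>', '<oss_bucket>', 'Migration/OPENAPIDemo/'

bucket = oss2.Bucket(oss2.Auth(access_key_id, access_key_secret), oss_endpoint, oss_bucket)
client = AcsClient(access_key_id, access_key_secret, 'cn-hangzhou')

for object_info in oss2.ObjectIterator(bucket, prefix=directory):
    # skip the folder marker itself and anything that is not a full backup file
    if object_info.key == directory or not object_info.key.endswith('.bak'):
        continue
    # derive the target database name from a key such as
    # Migration/OPENAPIDemo/TestMigration_FULL_20180518153544.bak
    db_name = object_info.key.rsplit('/', 1)[-1].rsplit('_', 2)[0]
    # one migration task per backup file
    request = CreateMigrateTaskRequest.CreateMigrateTaskRequest()
    request.set_accept_format('json')
    request.set_DBInstanceId(rds_instance_id)
    request.set_DBName(db_name)
    request.set_BackupMode("FULL")
    request.set_IsOnlineDB(True)
    request.set_OSSUrls("")
    request.set_OssObjectPositions("%s:%s:%s" % (oss_endpoint, oss_bucket, object_info.key))
    request.set_CheckDBMode("SyncExecuteDBCheck")
    print(json.loads(client.do_action_with_exception(request)))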
Based on this analysis, the implementation needs the following six input parameters, described below:
access_key_id     : Alibaba Cloud user access key ID
access_key_secret : Alibaba Cloud user access key secret
rds_instance_id   : RDS SQL instance ID
oss_endpoint      : OSS endpoint address
oss_bucket        : OSS bucket name
directory         : folder path of the database backup files on the OSS bucket; pass "/" for the bucket root
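Note that the script derives the target database name from each backup file name, so the files under directory are expected to follow the <DBName>_<BackupType>_<Timestamp>.<ext> convention shown in the script's comments (for example TestMigration_FULL_20180518153544.bak). A quick illustration of that parsing, mirroring the oss_key_parse logic in the script below with a sample key:

# Sample OSS object key following the <DBName>_<BackupType>_<Timestamp>.<ext> convention
file_key = 'Migration/OPENAPIDemo/TestMigration_FULL_20180518153544.bak'

file_name = file_key.rsplit('/', 1)[-1]            # TestMigration_FULL_20180518153544.bak
db_name, bak_type, rest = file_name.rsplit('_', 2) # 'TestMigration', 'FULL', '20180518153544.bak'
date, ext = rest.rsplit('.', 1)                    # '20180518153544', 'bak'
print("db_name=%s, bak_type=%s, date=%s, ext=%s" % (db_name, bak_type, date, ext))
# db_name=TestMigration, bak_type=FULL, date=20180518153544, ext=bak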
For the prerequisites, see the preparation section of last month's report, MSSQL · 最佳实践 · RDS SDK实现数据库迁移上阿里云RDS SQL Server (using the RDS SDK to migrate databases to Alibaba Cloud RDS SQL Server).
In this article we use the Python RDS SDK to implement the migration to RDS SQL Server; you can of course use the C#, Java, or any other SDK version instead. The detailed implementation is as follows:
#!/usr/bin/python
# -*- coding: utf-8 -*-

"""***************************************************************************************
# Script name   : RDSSQLCreateMigrateTasksBatchly.py
# Author        : jianming.wjm@alibaba-inc.com
# Create Date   : 2018-05-17 19:27
# Language      : Python 2.7.10
# Run platform  : Mac OS X 10.12.6

# Purpose       :
                  This script is for batch migration of users' offline SQL Server databases to Alibaba Cloud RDS SQL Server.
                  Users' FULL backup files are already located in their own OSS Bucket folder.
                  This script migrates all offline databases backed up under that OSS Bucket folder to RDS SQL Server.
                  We accomplish this by calling the Alibaba Cloud RDS OPENAPI.

# Limitation    :
    RDS Edition : Supports the RDS editions listed below
                  '2012', '2012_web', '2012_std', '2012_ent', '2012_std_ha', '2012_ent_ha',
                  '2014_web', '2014_std', '2014_ent', '2014_std_ha', '2014_ent_ha',
                  '2016_web', '2016_std', '2016_ent', '2016_std_ha', '2016_ent_ha'

# Preparation   :
                  1. python 2.7.x installed (I'm using 2.7.10)
                  2. pip install aliyun-python-sdk-rds
                  3. pip install oss2

# Usage         :
    Help        : python RDSSQLCreateMigrateTasksBatchly.py -h
    Example     :
                  python ~/Downloads/RDSSQLCreateMigrateTasksBatchly.py -k <access_key_id> -s <access_key_secret> -i <rds_instance_id> -e <oss_endpoint> -b <oss_bucket> -d <directory>

                  variables description
                  access_key_id     : Alibaba Cloud user access key id, e.g.: LTAIKeRvKPRwkaU3
                  access_key_secret : Alibaba Cloud user access key secret, e.g.: BbZ7xhrertQ0dfgMqfAZPByhnp4G2k
                  rds_instance_id   : RDS SQL instance ID, e.g.: rm-2zesz4564ud8s7123
                  oss_endpoint      : OSS Endpoint address, e.g.: oss-cn-beijing.aliyuncs.com
                  oss_bucket        : OSS Bucket name, e.g.: atp-test-on-ecs
                  directory         : Sub folder name under the OSS Bucket, e.g.: Migration/OPENAPIDemo

# Output        : There are two sections of output: one is the input variables, the other is the migration requests and responses.

*********************Input variables*************************************

************************************************************************

*********************Migration requests**********************************

************************************************************************


# Modify Author : jianming.wjm@alibaba-inc.com
# Modify Date   : 2018-05-19 21:43
# Function      :
#**************************************************************************************
"""

import json
import os
import sys, getopt
import re
import oss2
import time

from aliyunsdkcore.client import AcsClient
from aliyunsdkrds.request.v20140815 import DescribeMigrateTasksForSQLServerRequest
from aliyunsdkrds.request.v20140815 import CreateMigrateTaskRequest
from aliyunsdkrds.request.v20140815 import DescribeDBInstanceAttributeRequest


def main(argv):
    access_key_id = access_key_secret = rds_instance_id = oss_endpoint = oss_bucket = directory = ''

    # usage help
    try:
        opts, args = getopt.getopt(argv, "hk:s:i:e:b:d:", ["access_key_id=", "access_key_secret=", "rds_instance_id=", "oss_endpoint=", "oss_bucket=", "directory="])
    except getopt.GetoptError:
        print ('%s -k <access_key_id> -s <access_key_secret> -i <rds_instance_id> -e <oss_endpoint> -b <oss_bucket> -d <directory>' % (sys.argv[0]))
        sys.exit(2)

    for opt, arg in opts:
        if opt == '-h':
            print ('%s -k <access_key_id> -s <access_key_secret> -i <rds_instance_id> -e <oss_endpoint> -b <oss_bucket> -d <directory>' % (sys.argv[0]))
            sys.exit()
        elif opt in ("-k", "-K", "--access_key_id"):
            access_key_id = arg
        elif opt in ("-s", "-S", "--access_key_secret"):
            access_key_secret = arg
        elif opt in ("-i", "-I", "--rds_instance_id"):
            rds_instance_id = arg
        elif opt in ("-e", "-E", "--oss_endpoint"):
            oss_endpoint = arg
        elif opt in ("-b", "-B", "--oss_bucket"):
            oss_bucket = arg
        elif opt in ("-d", "-D", "--directory"):
            if arg.endswith("/"):
                directory = arg
            else:
                directory = str("%s/" % arg)

    # show the input parameters
    print ("\n*********************Input variables*************************************\n"
           "access_key_id = %s\naccess_key_secret = %s\nrds_instance_id = %s\noss_endpoint = %s\noss_bucket = %s\ndirectory = %s\n"
           "************************************************************************"
           % (access_key_id, access_key_secret, rds_instance_id, oss_endpoint, oss_bucket, directory))

    # check RDS & OSS region to make sure they are located in the same region.
    success, rds_details = rds_instnace_details(access_key_id, access_key_secret, rds_instance_id)
    if not success:
        print ("%s" % rds_details)
        sys.exit()

    rds_db_version, rds_engine, rds_region = rds_details["EngineVersion"], rds_details["Engine"], rds_details["RegionId"]

    success, oss_details = oss_bucket_details(access_key_id, access_key_secret, oss_endpoint, oss_bucket)
    if not success:
        print ("%s" % oss_details)
        sys.exit()

    oss_region = oss_details.location

    # supported db version checking.
    if rds_engine != 'SQLServer' \
        or rds_db_version not in ['2008r2', '2012', '2012_web', '2012_std', '2012_ent', '2012_std_ha', '2012_ent_ha',
                                  '2014_web', '2014_std', '2014_ent', '2014_std_ha', '2014_ent_ha',
                                  '2016_web', '2016_std', '2016_ent', '2016_std_ha', '2016_ent_ha']:
        print("RDS engine doesn't support, this is only for RDS SQL Server engine.")
        sys.exit()

    # RDS & OSS Bucket are not under the same region.
    if not oss_region.endswith(rds_region):
        print("RDS & OSS Bucket are not located in the same region.")
        sys.exit()

    # RDS & OSS Bucket are in the same region.
    print ("\n*********************Migration requests**********************************")
    full_migrate(access_key_id, access_key_secret, rds_instance_id, oss_endpoint, oss_bucket, directory, rds_db_version)
    print ("************************************************************************")


def rds_instnace_details(access_key_id, access_key_secret, rds_instance_id):
    request = DescribeDBInstanceAttributeRequest.DescribeDBInstanceAttributeRequest()
    request.set_DBInstanceId(rds_instance_id)
    success, response = _send_request(access_key_id, access_key_secret, request)
    if success:
        if response["Items"]["DBInstanceAttribute"]:
            # print response["Items"]["DBInstanceAttribute"][0]["EngineVersion"]
            # print response["Items"]["DBInstanceAttribute"][0]["RegionId"]
            return True, response["Items"]["DBInstanceAttribute"][0]
        else:
            return False, "Couldn't find the specified RDS [%s]." % rds_instance_id

    return False, response


def full_migrate(access_key_id, access_key_secret, rds_instance_id, oss_endpoint, oss_bucket, directory, rds_db_version):
    """
    this supports full backup files migration.
    """

    # get all backup objects under sub_folder
    key_parts_list, do = oss_list_objects(access_key_id, access_key_secret, oss_endpoint, oss_bucket, directory), 0

    # foreach object
    for key_parts in key_parts_list:
        print ("\n--%s. [%s] migrate to your RDS: [%s] and the database name will be: [%s]." % (do, key_parts.file_key, rds_instance_id, key_parts.db_name))
        do += 1
        # print ("%s" % key_parts.sign_url)

        request = CreateMigrateTaskRequest.CreateMigrateTaskRequest()
        request.set_DBInstanceId(rds_instance_id)
        request.set_DBName(key_parts.db_name)
        request.set_BackupMode("FULL")
        request.set_IsOnlineDB(True)
        if rds_db_version == '2008r2':
            request.set_DBName(key_parts.db_name.lower())
            request.set_OSSUrls(key_parts.sign_url)
        else:
            request.set_OSSUrls("")
            request.set_OssObjectPositions("%s:%s:%s" % (oss_endpoint, oss_bucket, key_parts.file_key))  # OSSEndpoint:OSSBucket:OSSFileKey
        request.set_CheckDBMode("SyncExecuteDBCheck")

        success, response = _send_request(access_key_id, access_key_secret, request)

        if success:
            print response

            print ("--I'm sleeping for 2 seconds....")
            time.sleep(2)
        else:
            print ("OPENAPI Response Error !!!!! : %s" % response)


"""
send request to OPENAPI
and get the response details
"""
def _send_request(access_key_id, access_key_secret, request, region='cn-hangzhou'):
    request.set_accept_format('json')
    try:
        # clt = AcsClient(access_key_id, access_key_secret, 'cn-hangzhou')
        clt = AcsClient(access_key_id, access_key_secret, region)
        response_str = clt.do_action_with_exception(request)
        response_detail = json.loads(response_str)
        return True, response_detail
    except Exception as e:
        return False, e


class oss_key_parts(object):
    """
    if the whole object file key looks like below:
    Migration/OPENAPIDemo/TestMigration_FULL_20180518153544.bak
    then:
    : param str file_key: OSS object file key.
    : param str sub_folder: OSS sub folder name, such as Migration/OPENAPIDemo
    : param str file_name: OSS object file name, such as TestMigration_FULL_20180518153544.bak
    : param str db_name: database name, such as 'TestMigration'
    : param str bak_type: backup type, such as 'FULL'
    : param str date: backup date time, such as '20180518153544'
    : param str ext: backup file extension, such as 'bak'
    """
    def __init__(self):
        self.file_key = ''
        self.sub_folder = ''
        self.file_name = ''
        self.db_name = ''
        self.bak_type = ''
        self.date = ''
        self.ext = ''
        self.sign_url = ''


"""
parse the OSS file key string into oss key parts
and return an oss_key_parts object.
"""
def oss_key_parse(file_key):

    key_parts = oss_key_parts()
    try:
        if file_key.find('/') >= 0:
            file_key_parts = file_key.rsplit('/', 1)
        else:
            file_key_parts = ['/', file_key]

        key_parts.file_key = file_key
        key_parts.sub_folder = file_key_parts[0]
        key_parts.file_name = file_key_parts[1]

        key_list = file_key_parts[1].rsplit('_', 2)
        key_parts.db_name, \
        key_parts.bak_type, \
        key_parts.date, \
        key_parts.ext = key_list[0], \
                        key_list[1], \
                        key_list[2].rsplit('.', 1)[0], \
                        key_list[2].rsplit('.', 1)[1]
    except Exception as e:
        pass

    return key_parts


def oss_list_objects(access_key_id, access_key_secret, oss_endpoint, oss_bucket, directory):
    """
    list all OSS objects under the specified sub folder
    and return the objects list.
    """
    bucket = oss2.Bucket(oss2.Auth(access_key_id, access_key_secret), oss_endpoint, oss_bucket)
    key_parts_list = []

    # OSS Bucket Root
    if directory == '/':
        for object_info in oss2.ObjectIterator(bucket, delimiter='/'):
            if not object_info.is_prefix():
                key_part = oss_key_parse(object_info.key)

                # get object sign_url
                key_part.sign_url = bucket.sign_url('GET', object_info.key, 24 * 3600)

                if key_part.ext in ['bak', 'trn', 'log', 'diff']:
                    key_parts_list.append(key_part)
                else:
                    print ("Warning!!!!!, [%s] is not a backup file, filtered." % (key_part.file_key))
    else:
        for i, object_info in enumerate(oss2.ObjectIterator(bucket, prefix=directory)):
            # we only want the backup files, not the folder itself
            if not object_info.is_prefix():
                if object_info.key.startswith(directory) and object_info.key != directory:
                    # print ("%s" % (object_info.key))

                    key_part = oss_key_parse(object_info.key)

                    # get object sign_url
                    key_part.sign_url = bucket.sign_url('GET', object_info.key, 24 * 3600)

                    if key_part.ext in ['bak', 'trn', 'log', 'diff']:
                        key_parts_list.append(key_part)
                    else:
                        print ("Warning!!!!!, [%s] is not a valid backup file, filtered." % (key_part.file_key))

    if not key_parts_list:
        print("There is no backup file on OSS Bucket [%s] under the [%s] folder, please check." % (oss_bucket, directory))

    return key_parts_list


def oss_bucket_details(access_key_id, access_key_secret, oss_endpoint, oss_bucket):
    try:
        bucket = oss2.Bucket(oss2.Auth(access_key_id, access_key_secret), oss_endpoint, oss_bucket)
        bucket_info = bucket.get_bucket_info()
        # print ("bucket name:%s, region: %s" % (bucket_info.name, bucket_info.location))
        return True, bucket_info
    except Exception as e:
        return False, e


if __name__ == '__main__':
    main(sys.argv[1:])
Of course, rather than copying the code above, you can also download the Python script directly.
We briefly walk through how to use this instance-level one-click migration from three aspects:
Viewing the help
An example
The output
Just run the script with -h to view its usage:
python ~/Downloads/RDSSQLCreateMigrateTasksBatchly.py -h

~/Downloads/RDSSQLCreateMigrateTasksBatchly.py -k <access_key_id> -s <access_key_secret> -i <rds_instance_id> -e <oss_endpoint> -b <oss_bucket> -d <directory>
Here is a concrete example:
python ~/Downloads/RDSSQLCreateMigrateTasksBatchly.py -k LTAIQazXKPRwwErT -s BMkIUhroubQOLpOMqfA09IKlqp4G2k -i rm-2zesz5774ud8s71i5 -e oss-cn-beijing.aliyuncs.com -b atp-test-on-ecs -d Migration/OPENAPIDemo
The output of the command above has two parts:
Input variables: echoes all of your input parameters so that you can spot input mistakes
Migration messages: tell you which backup file will be migrated to which database on which RDS instance. The output of one run looks like this:
*********************Input variables*************************************
access_key_id = LTAIQazXKPRwwErT
access_key_secret = BMkIUhroubQOLpOMqfA09IKlqp4G2k
rds_instance_id = rm-2zesz5774ud8s71i5
oss_endpoint = oss-cn-beijing.aliyuncs.com
oss_bucket = atp-test-on-ecs
directory = Migration/OPENAPIDemo/
************************************************************************

*********************Migration requests**********************************

--0. [Migration/OPENAPIDemo/TestCollation_FULL_20180523225534.bak] migrate to your RDS: [rm-2zesz5774ud8s71i5] and the database name will be: [TestCollation].
{u'DBInstanceId': u'rm-2zesz5774ud8s71i5', u'BackupMode': u'FULL', u'MigrateTaskId': u'106121', u'RequestId': u'67E0DD7F-7219-4F67-AAE7-B27273921303', u'TaskId': u'68244691', u'DBName': u'TestCollation'}
--I'm sleeping for 2 seconds....

--1. [Migration/OPENAPIDemo/TestMigration_FULL_20180523225534.bak] migrate to your RDS: [rm-2zesz5774ud8s71i5] and the database name will be: [TestMigration].
{u'DBInstanceId': u'rm-2zesz5774ud8s71i5', u'BackupMode': u'FULL', u'MigrateTaskId': u'106122', u'RequestId': u'0916CD14-861B-4BF7-A68A-409E3996B0D3', u'TaskId': u'68244695', u'DBName': u'TestMigration'}
--I'm sleeping for 2 seconds....

--2. [Migration/OPENAPIDemo/testdb_FULL_20180523225534.bak] migrate to your RDS: [rm-2zesz5774ud8s71i5] and the database name will be: [testdb].
{u'DBInstanceId': u'rm-2zesz5774ud8s71i5', u'BackupMode': u'FULL', u'MigrateTaskId': u'106123', u'RequestId': u'5835B154-2EE3-4059-BFC4-6F798CDCE546', u'TaskId': u'68244699', u'DBName': u'testdb'}
--I'm sleeping for 2 seconds....
************************************************************************
With the approach in this article, we can migrate a user's offline or ECS self-built SQL Server instance to Alibaba Cloud RDS SQL Server at the instance level in one step, which greatly improves migration efficiency, simplifies the operation, and substantially improves the overall migration experience.