Recently I often need to create S3 buckets for backups. Every newly created bucket should have a lifecycle configuration that automatically deletes old data, to save storage space and cost.
Douzi wrote a simple Lambda function to automate this. Every time we create a bucket, the corresponding API is called; CloudTrail records this event and forwards it to CloudWatch, and CloudWatch then automatically invokes my function to create the lifecycle policy.
Below is a brief walkthrough with screenshots.
Create a new CloudWatch Rule
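The rule in the screenshot was created in the console; as a rough sketch, the same rule could also be created with boto3. The rule name, target Id, and Lambda ARN below are placeholders, not the values actually used here.

import json
import boto3

events = boto3.client('events')

# Event pattern: match the CreateBucket API call recorded by CloudTrail
event_pattern = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["CreateBucket"]
    }
}

# Create (or update) the CloudWatch Events rule
events.put_rule(
    Name='s3-create-bucket-rule',            # placeholder rule name
    EventPattern=json.dumps(event_pattern),
    State='ENABLED'
)

# Point the rule at the Lambda function (ARN is a placeholder)
events.put_targets(
    Rule='s3-create-bucket-rule',
    Targets=[{'Id': 'add-lifecycle-lambda',
              'Arn': 'arn:aws:lambda:ap-southeast-2:123456789012:function:add-lifecycle'}]
)

Note that when the target is added through the console, the permission allowing CloudWatch Events to invoke the Lambda function is added for you; if you script it like this, you would also have to grant that permission yourself (for example via lambda add-permission).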
The corresponding Lambda function
Its default IAM role already has permission to access CloudWatch. I created a new S3 policy and attached it to the function's IAM role, so the Lambda function can access both CloudWatch and S3.
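The exact policy used isn't shown here; as a minimal sketch, the only S3 permission the function strictly needs is the ability to put lifecycle configurations. The role and policy names below are placeholders.

import json
import boto3

iam = boto3.client('iam')

# Minimal S3 permission for this function: write lifecycle configurations
s3_lifecycle_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:PutLifecycleConfiguration",
            "Resource": "*"
        }
    ]
}

# Attach it as an inline policy to the Lambda function's execution role
iam.put_role_policy(
    RoleName='add-lifecycle-lambda-role',     # placeholder role name
    PolicyName='s3-put-lifecycle',            # placeholder policy name
    PolicyDocument=json.dumps(s3_lifecycle_policy)
)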
Below is the Python code:
import logging
import boto3
from botocore.exceptions import ClientError

# Lifecycle rule applied to every new bucket: expire all objects after 100 days
lifecycle_config_settings = {
    'Rules': [
        {'ID': 'Delete Rule',
         'Filter': {'Prefix': ''},
         'Status': 'Enabled',
         'Expiration': {'Days': 100}}
    ]}


def put_bucket_lifecycle_configuration(bucket_name, lifecycle_config):
    """Set the lifecycle configuration of an Amazon S3 bucket

    :param bucket_name: string
    :param lifecycle_config: dict of lifecycle configuration settings
    :return: True if lifecycle configuration was set, otherwise False
    """
    # Set the configuration
    s3 = boto3.client('s3')
    try:
        s3.put_bucket_lifecycle_configuration(Bucket=bucket_name,
                                              LifecycleConfiguration=lifecycle_config)
    except ClientError as e:
        logging.error(e)
        return False
    return True


def lambda_handler111(event, context):
    # The CloudTrail CreateBucket record carries the bucket name in
    # detail.requestParameters.bucketName
    test_bucket_name = event.get('detail').get('requestParameters').get('bucketName')
    print(event)
    print(test_bucket_name)
    success = put_bucket_lifecycle_configuration(test_bucket_name, lifecycle_config_settings)
    if success:
        # logging.info(f'The lifecycle configuration was set for {test_bucket_name}')
        print(f'The lifecycle configuration was set for {test_bucket_name}')
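To test the handler in the Lambda console without actually creating a bucket, you can use a test event shaped like the CloudTrail record. A trimmed-down sketch (the bucket name is a placeholder, and the real event contains many more fields):

# Minimal test event mimicking the CloudTrail CreateBucket record that
# CloudWatch Events delivers to the function (bucket name is a placeholder)
test_event = {
    "detail": {
        "eventSource": "s3.amazonaws.com",
        "eventName": "CreateBucket",
        "requestParameters": {
            "bucketName": "my-test-backup-bucket"
        }
    }
}

# Invoking the handler locally with this event exercises the same code path:
# lambda_handler111(test_event, None)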
In actual operation, when I create a new bucket, the function is invoked automatically and the policy is added.
Below are the CloudWatch logs
This is the lifecycle policy of the newly created bucket
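Besides checking the console, the result can also be verified from a script. A small sketch (the bucket name is a placeholder):

import boto3

s3 = boto3.client('s3')

# Fetch the lifecycle configuration that the Lambda function applied
response = s3.get_bucket_lifecycle_configuration(Bucket='my-test-backup-bucket')
print(response['Rules'])
# Expect a single rule named 'Delete Rule' with an Expiration of 100 days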