celery beat is a scheduler: it kicks off tasks at regular intervals, which are then executed by the available worker nodes in the cluster.
By default, entries are taken from the beat_schedule setting, but custom stores can also be used, such as storing the entries in a SQL database.
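As an alternative to registering tasks programmatically (as done below), entries can be declared statically via the beat_schedule setting mentioned above. A minimal sketch; the entry names and arguments here are illustrative, not taken from the example in this post:

```python
from celery import Celery
from celery.schedules import crontab

app = Celery('tasks', broker='pyamqp://')

app.conf.beat_schedule = {
    # Illustrative entry name; runs tasks.add every 30 seconds
    'add-every-30-seconds': {
        'task': 'tasks.add',
        'schedule': 30.0,
        'args': (16, 16),
    },
    # Runs tasks.test every Monday at 7:30 a.m.
    'test-monday-morning': {
        'task': 'tasks.test',
        'schedule': crontab(hour=7, minute=30, day_of_week=1),
        'args': ('Happy Mondays!',),
    },
}
```

Each key in beat_schedule is a unique entry name; the schedule value may be a number of seconds or a crontab expression.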
You must make sure that only a single scheduler is running for a given schedule at a time, otherwise you will end up with duplicate tasks. Using a centralized approach means the schedule does not have to be synchronized, and the service can operate without using locks.
To call tasks periodically, you have to add an entry to the beat schedule list.
tasks.py:
from celery import Celery
from celery.schedules import crontab

app = Celery('tasks',
             broker='pyamqp://celery:celery@192.168.0.12:5672/celery_vhost',
             backend='redis://localhost:6379/0')
# app = Celery('tasks', backend='redis://localhost', broker='pyamqp://')

app.conf.update(
    task_serializer='json',
    accept_content=['json'],  # Ignore other content
    result_serializer='json',
    timezone='Asia/Shanghai',
    enable_utc=True,
)

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    # Calls test('hello') every 10 seconds.
    sender.add_periodic_task(10.0, test.s('hello'), name='add every 10')

    # Calls add(2, 2) every 30 seconds
    sender.add_periodic_task(30.0, add.s(2, 2), expires=10)

    # Executes every Monday morning at 7:30 a.m.
    sender.add_periodic_task(
        crontab(hour=7, minute=30, day_of_week=1),
        test.s('Happy Mondays!'),
    )

@app.task
def test(arg):
    print(arg)

@app.task
def add(x, y):
    return x + y
Beat needs to store the last run times of the tasks in a local database file (named celerybeat-schedule by default), so it needs write access to the current working directory; alternatively, you can specify a custom location for this file:
celery -A tasks beat -s /var/run/celery/celerybeat-schedule
Then start a worker in another terminal:
celery -A tasks worker -l info
You should see log output like this:
[2019-10-24 14:45:53,448: INFO/ForkPoolWorker-4] Task tasks.add[e028900c-f2a3-468e-8cb8-4ae72d0e77fe] succeeded in 0.0020012762397527695s: 4
[2019-10-24 14:46:03,370: INFO/MainProcess] Received task: tasks.test[0635b276-19c9-4d76-9941-dbe9e7320a7f]
[2019-10-24 14:46:03,372: WARNING/ForkPoolWorker-6] hello
[2019-10-24 14:46:03,374: INFO/ForkPoolWorker-6] Task tasks.test[0635b276-19c9-4d76-9941-dbe9e7320a7f] succeeded in 0.0021341098472476006s: None
[2019-10-24 14:46:13,371: INFO/MainProcess] Received task: tasks.test[afcfa84c-3a3b-48bf-9191-59ea55b08eea]
[2019-10-24 14:46:13,373: WARNING/ForkPoolWorker-8] hello
[2019-10-24 14:46:13,375: INFO/ForkPoolWorker-8] Task tasks.test[afcfa84c-3a3b-48bf-9191-59ea55b08eea] succeeded in 0.002273786813020706s: None
You can also embed beat inside the worker by enabling the worker's -B option. This is convenient if you will never run more than one worker node, but it is not commonly used and is therefore not recommended for production use:
celery -A tasks worker -B -l info