This time let's dig into the source code of Scrapy's retry mechanism, learn the ideas behind it, and write a customized middleware that captures failed URLs and related information.
Scrapy is an application framework written for crawling websites and extracting structured data. It can be used in a wide range of programs, including data mining, information processing, and archiving historical data.
It was originally designed for page scraping (more precisely, web scraping), but it can also be used to fetch the data returned by APIs (such as Amazon Associates Web Services) or as a general-purpose web crawler.
A single diagram makes the data flow inside Scrapy clear:
For a quick idea of what each component does, see the simplified data flow below:
No matter how powerful your machine is or how fast your network connection is, in a large-scale Scrapy job the number of items actually scraped never equals the number you hoped for; there are always a few failed requests that slip through the net. By analyzing Scrapy's logs, you can see that the failures fall into two categories: exceptions raised while downloading, and HTTP error status codes.
Whether it is an exception or an HTTP error, Scrapy has a corresponding retry mechanism. We can configure the retry parameters in the settings.py file, and when exceptions and errors occur at runtime Scrapy handles them automatically. The most important piece is the retry middleware, so let's take a look at Scrapy's RetryMiddleware.
In your Scrapy project's middlewares.py file, type the following line:
from scrapy.downloadermiddlewares.retry import RetryMiddleware
Hold down Ctrl (Command on a Mac) and left-click RetryMiddleware to jump to the file where the middleware is defined. You can also find it by browsing the files directly; the path is:
site-packages/scrapy/downloadermiddlewares/retry.py (class RetryMiddleware)
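If jump-to-definition is not handy, you can also print the module's location directly. A small sketch, assuming Scrapy is installed in the current environment:
import scrapy.downloadermiddlewares.retry as retry_module

# Print the absolute path of the installed retry middleware module
print(retry_module.__file__)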
The source code is as follows:
# Imports from the top of scrapy/downloadermiddlewares/retry.py
import logging

from twisted.internet import defer
from twisted.internet.error import TimeoutError, DNSLookupError, \
    ConnectionRefusedError, ConnectionDone, ConnectError, \
    ConnectionLost, TCPTimedOutError
from twisted.web.client import ResponseFailed

from scrapy.exceptions import NotConfigured
from scrapy.utils.response import response_status_message
from scrapy.core.downloader.handlers.http11 import TunnelError
from scrapy.utils.python import global_object_name

logger = logging.getLogger(__name__)


class RetryMiddleware(object):

    # IOError is raised by the HttpCompression middleware when trying to
    # decompress an empty response
    # Exceptions that trigger a retry; note that some of them are exactly
    # the exceptions that showed up in the logs above
    EXCEPTIONS_TO_RETRY = (defer.TimeoutError, TimeoutError, DNSLookupError,
                           ConnectionRefusedError, ConnectionDone, ConnectError,
                           ConnectionLost, TCPTimedOutError, ResponseFailed,
                           IOError, TunnelError)

    def __init__(self, settings):
        # Read the retry settings from settings.py; if retrying is not
        # enabled, this middleware is skipped entirely
        if not settings.getbool('RETRY_ENABLED'):
            raise NotConfigured
        self.max_retry_times = settings.getint('RETRY_TIMES')
        self.retry_http_codes = set(int(x) for x in settings.getlist('RETRY_HTTP_CODES'))
        self.priority_adjust = settings.getint('RETRY_PRIORITY_ADJUST')

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.settings)

    # If the response comes back with a status code we want to retry
    def process_response(self, request, response, spider):
        if request.meta.get('dont_retry', False):
            return response
        if response.status in self.retry_http_codes:
            reason = response_status_message(response.status)
            return self._retry(request, reason, spider) or response
        return response

    # An exception that should trigger a retry was raised
    def process_exception(self, request, exception, spider):
        if isinstance(exception, self.EXCEPTIONS_TO_RETRY) \
                and not request.meta.get('dont_retry', False):
            return self._retry(request, exception, spider)

    # The retry operation itself
    def _retry(self, request, reason, spider):
        retries = request.meta.get('retry_times', 0) + 1

        retry_times = self.max_retry_times
        if 'max_retry_times' in request.meta:
            retry_times = request.meta['max_retry_times']

        stats = spider.crawler.stats
        if retries <= retry_times:
            logger.debug("Retrying %(request)s (failed %(retries)d times): %(reason)s",
                         {'request': request, 'retries': retries, 'reason': reason},
                         extra={'spider': spider})
            retryreq = request.copy()
            retryreq.meta['retry_times'] = retries
            retryreq.dont_filter = True
            retryreq.priority = request.priority + self.priority_adjust

            if isinstance(reason, Exception):
                reason = global_object_name(reason.__class__)

            stats.inc_value('retry/count')
            stats.inc_value('retry/reason_count/%s' % reason)
            return retryreq
        else:
            stats.inc_value('retry/max_reached')
            logger.debug("Gave up retrying %(request)s (failed %(retries)d times): %(reason)s",
                         {'request': request, 'retries': retries, 'reason': reason},
                         extra={'spider': spider})
Reading the source, we can see that responses that come back with an HTTP error code are handled by process_response. The handling is straightforward: check whether response.status is in the retry_http_codes set, which is read from the settings file:
RETRY_ENABLED = True  # retrying failed requests is enabled by default; turn it off if you do not need it
RETRY_TIMES = 3  # number of retries after a failure; the default is 2
RETRY_HTTP_CODES = [500, 502, 503, 504, 522, 524, 408]  # retry only when one of these status codes is returned
Exceptions are handled in the same spirit: EXCEPTIONS_TO_RETRY is a tuple holding all the exception types worth retrying. process_exception checks whether the raised exception is an instance of one of those types; if it is, the request enters the retry logic, otherwise this middleware ignores it.
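As the source above shows, both code paths also honour two per-request switches stored in request.meta: dont_retry disables retrying for a single request, and max_retry_times overrides RETRY_TIMES for that request only. A minimal sketch, with a hypothetical spider name and example URLs:
import scrapy

class MetaRetrySpider(scrapy.Spider):
    # Hypothetical spider, only to illustrate the per-request meta keys
    name = 'meta_retry_example'

    def start_requests(self):
        # Override RETRY_TIMES for this request only
        yield scrapy.Request('https://example.com/flaky-page',
                             meta={'max_retry_times': 5},
                             callback=self.parse)
        # Never retry this request, no matter what happens
        yield scrapy.Request('https://example.com/one-shot',
                             meta={'dont_retry': True},
                             callback=self.parse)

    def parse(self, response):
        pass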
Once we understand how Scrapy handles failures, we can borrow the same idea and write a middleware that captures the requests that slip through the net, making it easy to re-crawl them later.
We inherit from RetryMiddleware and override its process_response() and process_exception() methods. Talk is cheap, show the code:
class GetFailedUrl(RetryMiddleware):
    def __init__(self, settings):
        self.max_retry_times = settings.getint('RETRY_TIMES')
        self.retry_http_codes = set(int(x) for x in settings.getlist('RETRY_HTTP_CODES'))
        self.priority_adjust = settings.getint('RETRY_PRIORITY_ADJUST')

    def process_response(self, request, response, spider):
        if response.status in self.retry_http_codes:
            # Save the failed URL; you could also write it to any other store
            with open(str(spider.name) + ".txt", "a") as f:
                f.write(response.url + "\n")
            return response
        return response

    def process_exception(self, request, exception, spider):
        # Handle requests that raised one of the retryable exceptions
        if isinstance(exception, self.EXCEPTIONS_TO_RETRY):
            with open(str(spider.name) + ".txt", "a") as f:
                f.write(str(request) + "\n")
            return None
Add the middleware in settings.py:
DOWNLOADER_MIDDLEWARES = {
    'myspider.middlewares.TabelogDownloaderMiddleware': 543,
    'myspider.middlewares.RandomProxy': 200,
    'myspider.middlewares.GetFailedUrl': 220,
}
To test it, we deliberately use a wrong URL or shorten download_delay so that all kinds of exceptions show up, and this time we are able to capture them.
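With the failed URLs saved to <spider name>.txt, a later re-crawl only needs to read the file back in. A minimal sketch, assuming the file was written by GetFailedUrl for a spider named myspider:
import scrapy

class RecrawlSpider(scrapy.Spider):
    # Hypothetical spider that re-crawls the URLs captured by GetFailedUrl;
    # 'myspider.txt' is assumed to be the file that middleware wrote
    name = 'recrawl'

    def start_requests(self):
        with open('myspider.txt') as f:
            for line in f:
                url = line.strip()
                # process_exception wrote str(request), so keep only plain URLs
                if url.startswith('http'):
                    yield scrapy.Request(url, callback=self.parse, dont_filter=True)

    def parse(self, response):
        # Reuse the original spider's parsing logic here
        pass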