Today's topic is image downloading. Scrapy provides the ImagesPipeline class as a convenient way to download and store images.
(1) First, we again use dribbble.com as the site to crawl. In the project's dribbble.py file, extract the img src attribute from the response to get the image URL; we covered this step in an earlier lesson.
(2) Then add fields to items.py according to your needs. Here we create fields for the image URL, the title, and the date:
import scrapy


class XkdDribbbleSpiderItem(scrapy.Item):
    title = scrapy.Field()
    image_url = scrapy.Field()
    date = scrapy.Field()
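A note on the design: scrapy.Field() is just a placeholder that carries optional metadata; an Item then behaves like a dict whose allowed keys are the declared fields, which is why the spider below assigns values with dri_item['title'] and so on.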
(3) Next, enable the image pipeline and configure the download settings in settings.py. The pipeline registered here is the custom ImagePipeline defined in pipelines.py below:

import os

# directory containing settings.py, used as the base path for the image store
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

ITEM_PIPELINES = {
    # 'XKD_Dribbble_Spider.pipelines.XkdDribbbleSpiderPipeline': 300,
    # once the spider yields an item, this pipeline downloads the URLs in image_url
    'XKD_Dribbble_Spider.pipelines.ImagePipeline': 1,
}
# the item field that holds the image URLs to download
IMAGES_URLS_FIELD = 'image_url'
# where downloaded images are stored
IMAGES_STORE = os.path.join(BASE_DIR, 'images')
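Two things are worth knowing about these settings: the built-in images pipeline (and any subclass of it) needs the Pillow library installed to process images, and the field named by IMAGES_URLS_FIELD is expected to contain a list of URLs rather than a single string, which is why the spider below wraps the URL in a list. The final location on disk is IMAGES_STORE joined with the relative path returned by file_path().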
(4) Here is the complete spider, dribbble.py:

import scrapy
from urllib import parse
from scrapy.http import Request
from ..items import XkdDribbbleSpiderItem
from datetime import datetime


class DribbbleSpider(scrapy.Spider):
    name = 'dribbble'
    allowed_domains = ['dribbble.com']
    start_urls = ['https://dribbble.com/stories']

    def parse(self, response):
        # grab the story links on the list page
        # urls = response.css('h2 a::attr(href)').extract()
        a_nodes = response.css('header div.teaser a')
        for a_node in a_nodes:
            a_url = a_node.css('::attr(href)').extract()[0]
            a_image_url = a_node.css('img::attr(src)').extract()[0]
            # carry the image URL over to the detail-page callback via meta
            yield Request(url=parse.urljoin(response.url, a_url),
                          callback=self.parse_analyse,
                          meta={'a_image_url': a_image_url})

    def parse_analyse(self, response):
        a_image_url = response.meta.get('a_image_url')
        title = response.css('.post header h1::text').extract()[0]
        date = response.css('span.date::text').extract_first()
        date = date.strip()
        date = datetime.strptime(date, '%b %d, %Y').date()
        # build the item; the images pipeline expects a list of URLs
        dri_item = XkdDribbbleSpiderItem()
        dri_item['image_url'] = [a_image_url]
        dri_item['title'] = title
        dri_item['date'] = date
        yield dri_item
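The design point here is that the image URL is scraped on the list page while the title and date live on the detail page, so the spider passes the URL across requests in Request's meta dict and reads it back with response.meta.get() in the callback.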
(5) Finally, define the custom ImagePipeline in pipelines.py. It subclasses the built-in ImagesPipeline and overrides file_path() so that images are stored under a directory named after the current year:

# imports needed by the custom ImagePipeline
from scrapy.http import Request
from scrapy.utils.python import to_bytes
import hashlib
from scrapy.pipelines.images import ImagesPipeline
from datetime import datetime


class XkdDribbbleSpiderPipeline(object):
    def process_item(self, item, spider):
        return item


class ImagePipeline(ImagesPipeline):
    def file_path(self, request, response=None, info=None):
        ## start of deprecation warning block (can be removed in the future)
        def _warn():
            from scrapy.exceptions import ScrapyDeprecationWarning
            import warnings
            warnings.warn('ImagesPipeline.image_key(url) and file_key(url) methods are deprecated, '
                          'please use file_path(request, response=None, info=None) instead',
                          category=ScrapyDeprecationWarning, stacklevel=1)

        # check if called from image_key or file_key with url as first argument
        if not isinstance(request, Request):
            _warn()
            url = request
        else:
            url = request.url

        # detect if file_key() or image_key() methods have been overridden
        if not hasattr(self.file_key, '_base'):
            _warn()
            return self.file_key(url)
        elif not hasattr(self.image_key, '_base'):
            _warn()
            return self.image_key(url)
        ## end of deprecation warning block

        image_guid = hashlib.sha1(to_bytes(url)).hexdigest()  # change to request.url after deprecation
        # use the year as the storage directory instead of the default 'full/'
        return '{}/{}.jpg'.format(datetime.now().year, image_guid)
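Besides file_path(), ImagesPipeline also exposes an item_completed(results, item, info) hook that runs after all image requests for an item have finished. As a minimal sketch of a possible extension (not part of the original tutorial; the image_path field is a hypothetical addition to the item), it could record where each image landed and drop items whose download failed:

from scrapy.exceptions import DropItem
from scrapy.pipelines.images import ImagesPipeline


class ImagePathPipeline(ImagesPipeline):
    def item_completed(self, results, item, info):
        # results is a list of (success, info) tuples; each info dict
        # holds 'url', 'path' (relative to IMAGES_STORE) and 'checksum'
        paths = [x['path'] for ok, x in results if ok]
        if not paths:
            raise DropItem('Image download failed for %s' % item.get('title'))
        item['image_path'] = paths[0]  # hypothetical field, not declared in items.py above
        return item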
Run the code and we can see the printed item information; which fields appear is determined by the model we built in the spider file. The images themselves are downloaded into the folder we specified.
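To try it yourself, the spider is started from the project root with Scrapy's standard command line, scrapy crawl dribbble (assuming the usual Scrapy project layout).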
An Item Pipeline, as the name suggests, filters and processes the scraped data. Its main jobs include cleaning HTML data, validating scraped data, checking scraped fields, de-duplicating and discarding repeated content, and saving the results to a database.
Every newly created project comes with a pipelines.py module. A pipeline has a few core methods, listed below; a minimal sketch that combines them follows the list:
open_spider(spider): called when the spider is opened; commonly used for initialization, such as opening a database connection or a file;

close_spider(spider): called when the spider is closed; commonly used to close database connections;

process_item(item, spider): item is the item being scraped and spider is the spider that scraped it. This method is called for every item pipeline component, and it must return an Item (or any subclass) object or raise a DropItem exception; dropped items are not processed by later pipeline components;

from_crawler(cls, crawler): a class method, commonly used to read configuration from settings.py.
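To tie these methods together, here is a minimal, illustrative pipeline (not from this project) that de-duplicates items by title and writes the survivors to a JSON Lines file; the JSON_FILE setting is a made-up example of reading configuration via from_crawler():

import json

from scrapy.exceptions import DropItem


class DedupJsonPipeline(object):
    def __init__(self, file_name):
        self.file_name = file_name
        self.seen_titles = set()

    @classmethod
    def from_crawler(cls, crawler):
        # read configuration from settings.py; JSON_FILE is a hypothetical setting
        return cls(file_name=crawler.settings.get('JSON_FILE', 'items.jl'))

    def open_spider(self, spider):
        # triggered when the spider opens: do initialization work here
        self.file = open(self.file_name, 'w', encoding='utf-8')

    def close_spider(self, spider):
        # triggered when the spider closes: release resources here
        self.file.close()

    def process_item(self, item, spider):
        # drop duplicates, pass everything else on to later pipelines
        if item['title'] in self.seen_titles:
            raise DropItem('Duplicate item found: %s' % item['title'])
        self.seen_titles.add(item['title'])
        self.file.write(json.dumps(dict(item), ensure_ascii=False, default=str) + '\n')
        return item

Like the pipelines above, it would be enabled by adding its dotted path to ITEM_PIPELINES in settings.py.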